Weed Classification #408

Merged 1 commit on Dec 29, 2023
Weed Classification/Dataset/README.md (119 additions, 0 deletions)
# Weed Classification using DL

## PROJECT TITLE

Weed Detection using Deep Learning

## GOAL

To identify the weed species in an image.

## DATASET

The link for the dataset used in this project: https://www.kaggle.com/datasets/imsparsh/deepweeds
The dataset contains 9 classes for classification.
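DeepWeeds is also published in the TensorFlow Datasets catalog under the name `deep_weeds`, which is one way to load it programmatically. A minimal sketch (the class list below follows the DeepWeeds paper: eight weed species plus a "negative" non-weed class; `load_deepweeds` is an illustrative helper, not code from this project):

```python
# The nine DeepWeeds classes: eight weed species plus "negative".
CLASS_NAMES = [
    "chinee_apple", "lantana", "parkinsonia", "parthenium",
    "prickly_acacia", "rubber_vine", "siam_weed", "snake_weed",
    "negative",
]

def load_deepweeds():
    """Load DeepWeeds (17,509 RGB images) via TensorFlow Datasets."""
    import tensorflow_datasets as tfds  # lazy import; requires tensorflow-datasets
    return tfds.load("deep_weeds", split="train", as_supervised=True)
```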

## EDA
![EDA](../Images/EDA1.png)
![Dataset Sample](../Images/Input.png)

## DESCRIPTION

This project aims to identify the weed species using Deep Learning.

## WHAT I HAVE DONE

1. Data collection: Loaded the dataset linked above using TensorFlow Datasets.
2. Data preprocessing: Preprocessed the images according to the requirements of each model.
3. Model selection: DenseNet and MobileNet V2, each with an added Dense classification layer.
4. Comparative analysis: Compared the accuracy scores of all the models.
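The preprocessing in step 2 can be sketched with plain numpy. This is a minimal illustration, assuming MobileNetV2-style scaling (pixel values mapped from [0, 255] to [-1, 1]); `preprocess` is a hypothetical helper, not the project's actual code:

```python
import numpy as np

def preprocess(image: np.ndarray) -> np.ndarray:
    """Scale a uint8 RGB image from [0, 255] to [-1, 1],
    the input range used by MobileNetV2-style preprocessing."""
    return image.astype("float32") / 127.5 - 1.0

# The extremes of the uint8 range map to -1 and +1.
img = np.array([[0, 128, 255]], dtype=np.uint8)
print(preprocess(img))  # approximately [[-1.0, 0.0039, 1.0]]
```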


## MODELS SUMMARY

```
Model: "model" (DenseNet)
__________________________________________________________________________________________________
 Layer (type)                       Output Shape            Param #    Connected to
==================================================================================================
 input_1 (InputLayer)               [(None, 224, 224, 3)]   0          []
 zero_padding2d (ZeroPadding2D)     (None, 230, 230, 3)     0          ['input_1[0][0]']
 conv1/conv (Conv2D)                (None, 112, 112, 64)    9408       ['zero_padding2d[0][0]']
 conv1/bn (BatchNormalization)      (None, 112, 112, 64)    256        ['conv1/conv[0][0]']
 conv1/relu (Activation)            (None, 112, 112, 64)    0          ['conv1/bn[0][0]']
 zero_padding2d_1 (ZeroPadding2D)   (None, 114, 114, 64)    0          ['conv1/relu[0][0]']
 pool1 (MaxPooling2D)               (None, 56, 56, 64)      0          ['zero_padding2d_1[0][0]']
 conv2_block1_0_bn (BatchNormalization) (None, 56, 56, 64)  256        ['pool1[0][0]']
 ...
==================================================================================================
Total params: 7,333,961
Trainable params: 380,105
Non-trainable params: 6,953,856
__________________________________________________________________________________________________
```

```
Model: "sequential_1" (MobileNet)
_________________________________________________________________
 Layer (type)                            Output Shape        Param #
=================================================================
 mobilenetv2_1.00_224 (Functional)       (None, 8, 8, 1280)  2257984
 global_average_pooling2d
 (GlobalAveragePooling2D)                (None, 1280)        0
 dense_3 (Dense)                         (None, 256)         327936
 dropout_1 (Dropout)                     (None, 256)         0
 dense_4 (Dense)                         (None, 9)           2313
=================================================================
Total params: 2,588,233
Trainable params: 330,249
Non-trainable params: 2,257,984
_________________________________________________________________
```
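The head-layer counts in the MobileNet summary above can be checked by hand: a Dense layer has `inputs * units + units` parameters (weights plus biases), and only the added head is trained while the MobileNetV2 base stays frozen. A small sketch of that arithmetic:

```python
def dense_params(n_in: int, n_out: int) -> int:
    """Parameters of a fully connected layer: weight matrix plus biases."""
    return n_in * n_out + n_out

# dense_3: 1280 MobileNetV2 features -> 256 units
print(dense_params(1280, 256))  # 327936, matching the summary

# dense_4: 256 units -> 9 classes
print(dense_params(256, 9))     # 2313, matching the summary

# Trainable params are the head only; the frozen base contributes 2,257,984.
trainable = dense_params(1280, 256) + dense_params(256, 9)
total = 2_257_984 + trainable
print(trainable, total)         # 330249 2588233
```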

## LIBRARIES NEEDED

The following libraries are required to run this project:

- matplotlib
- tensorflow
- keras
- PIL

## EVALUATION METRICS

The evaluation metrics I used to assess the models:

- Accuracy
- Loss
- Confusion Matrix

The confusion matrix is provided in the Images folder.
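A confusion matrix like the one in the Images folder can be computed from true and predicted labels in a few lines. This is a generic numpy sketch (not the notebook's code), shown here on a tiny 3-class example rather than the 9 DeepWeeds classes:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes=9):
    """Rows are true classes, columns are predicted classes."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Tiny illustration with 3 classes and 4 samples:
cm = confusion_matrix([0, 1, 2, 2], [0, 2, 2, 2], n_classes=3)
print(cm)
# Diagonal entries are correct predictions; accuracy is their share.
print(cm.trace() / cm.sum())  # 0.75
```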

## RESULTS
Results on the validation dataset:

MobileNet:
- Accuracy: 83%
- Loss: 0.47

Model-2:
- Accuracy: 70%
- Loss: 0.82


## CONCLUSION
Based on the results, we can draw the following conclusion:

1. The DenseNet model worked better than the MobileNet model.
Binary files added:

- Weed Classification/Images/Accuracy.png
- Weed Classification/Images/Confusion-Matrix.png
- Weed Classification/Images/EDA1.png
- Weed Classification/Images/Input.png
- Weed Classification/Images/Metrics.png
- Weed Classification/Images/Model-1.png
- Weed Classification/Images/Model-2.png
- Weed Classification/Images/Output_Classes.png
Weed Classification/Model/weed-classification.ipynb (1 addition, 0 deletions; large diff not rendered by default)

Weed Classification/README.md (119 additions, 0 deletions; content identical to Weed Classification/Dataset/README.md above)