---
license: apache-2.0
---
**Real art vs. AI-generated art image classification**

This project provides a pre-trained ResNet50 model for classifying images as either 'real art' or 'fake art' (AI-generated).
ResNet50 is a deep convolutional neural network with 50 layers, known for its residual connections, which help mitigate the vanishing gradient problem.
These shortcut connections skip one or more layers, allowing very deep networks to be trained and making the architecture highly effective for image classification tasks.
Our goal is to classify the source of an image with at least 85% accuracy and at least 80% recall.

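As an illustration of the residual-connection idea described above (not the repository's own code), here is a minimal Keras sketch of a residual block, where the input is added back to the output of two convolutional layers; it assumes the input already has `filters` channels:

```python
from tensorflow.keras import layers

def residual_block(x, filters=64):
    """Minimal residual block: two conv layers plus a shortcut connection."""
    shortcut = x
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    # The shortcut skips both conv layers; adding it back eases gradient flow.
    y = layers.Add()([shortcut, y])
    return layers.Activation("relu")(y)
```
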
*Installation instructions*

The following libraries are required: numpy, pandas, tensorflow, keras, matplotlib, scikit-learn (sklearn), and OpenCV (cv2).
We prepare the data by sorting the images into two equally sized folders: real art (labeled 0) and fake art (labeled 1).
The ResNet50 model is trained on 2,800 images in PNG and JPG format, all resized and normalized.
The images are split into a training set containing 90% of the data and a testing set containing the remaining 10%.

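A minimal preprocessing sketch of the steps above. The folder names `data/real` and `data/fake` and the 224x224 target size are assumptions for illustration; the README does not specify them:

```python
import os
import cv2
import numpy as np
from sklearn.model_selection import train_test_split

IMG_SIZE = 224  # assumed input size; ResNet50 commonly uses 224x224

def load_folder(folder, label):
    """Load, resize, and normalize every PNG/JPG image in a folder."""
    images, labels = [], []
    for name in os.listdir(folder):
        if not name.lower().endswith((".png", ".jpg", ".jpeg")):
            continue
        img = cv2.imread(os.path.join(folder, name))
        if img is None:
            continue
        img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))
        images.append(img.astype("float32") / 255.0)  # normalize to [0, 1]
        labels.append(label)
    return images, labels

# Hypothetical folder layout: real art labeled 0, fake (AI-generated) art labeled 1.
real_x, real_y = load_folder("data/real", 0)
fake_x, fake_y = load_folder("data/fake", 1)

X = np.array(real_x + fake_x)
y = np.array(real_y + fake_y)

# 90% training / 10% testing split, as described above.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.1, stratify=y, random_state=42
)
```
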
*ResNet50 model architecture*

The model is pre-trained on ImageNet, a dataset of more than a million images.
We apply transfer learning: the initial layers of ResNet50 are frozen and only the final layers are trained.
The final layer, which makes the predictions, is a binary classification layer with a sigmoid activation function.

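A hedged sketch of this transfer-learning setup in Keras. For simplicity the whole ResNet50 base is frozen here, and the size of the intermediate dense layer is an assumption, since the README does not specify either:

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

def build_model():
    """ResNet50 base pre-trained on ImageNet with a new binary classification head."""
    base = ResNet50(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = False  # freeze the pre-trained layers; only the new head is trained

    return models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(128, activation="relu"),   # assumed head size; not specified in the README
        layers.Dense(1, activation="sigmoid"),  # binary output: real art (0) vs. fake art (1)
    ])

model = build_model()
model.summary()
```
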
*Training Details*

The model is trained with binary cross-entropy loss and the Adam optimizer.
During training, 20% of the training data is held out as a validation set, independent of the test set, to monitor performance and avoid overfitting.
Training runs for 5 epochs with a batch size of 32 and uses 4-fold cross-validation to ensure robust performance.
After each fold, the model's weights are saved so that the best-performing weights can be reused.

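A sketch of this training loop, reusing the `build_model()`, `X_train`, and `y_train` names from the earlier sketches. How exactly the 4 folds interact with the 20% validation split is not spelled out in the README, so this version holds out 20% of each fold's training partition and then scores the fold's remaining data:

```python
from sklearn.model_selection import KFold

kfold = KFold(n_splits=4, shuffle=True, random_state=42)

for fold, (tr_idx, val_idx) in enumerate(kfold.split(X_train)):
    model = build_model()
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    # 5 epochs, batch size 32, with 20% of this fold's training data used for validation.
    history = model.fit(
        X_train[tr_idx], y_train[tr_idx],
        validation_split=0.2,
        epochs=5,
        batch_size=32,
    )

    # Score the held-out part of the fold and save this fold's weights for later reuse.
    loss, acc = model.evaluate(X_train[val_idx], y_train[val_idx], verbose=0)
    print(f"fold {fold}: accuracy = {acc:.3f}")
    model.save_weights(f"fold_{fold}.weights.h5")
```
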
*Performance Evaluation*

After training, the model is evaluated on the test set.
The following metrics are used to measure performance:
- Accuracy: the percentage of correct classifications.
- Precision, Recall, F1-score: evaluate the model's classification ability on both real art and AI-generated art images.
- Confusion matrix: displays true positives, false positives, true negatives, and false negatives, providing insight into classification performance.

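These metrics can be computed with scikit-learn; a minimal sketch, assuming the `model`, `X_test`, and `y_test` names from the sketches above:

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix)

# Threshold the sigmoid outputs at 0.5 to obtain hard labels.
y_prob = model.predict(X_test).ravel()
y_pred = (y_prob > 0.5).astype(int)

print("Accuracy :", accuracy_score(y_test, y_pred))
print("Precision:", precision_score(y_test, y_pred))
print("Recall   :", recall_score(y_test, y_pred))
print("F1-score :", f1_score(y_test, y_pred))
print("Confusion matrix:\n", confusion_matrix(y_test, y_pred))
```
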
*To run the project*

1. Place the images in the respective training and testing folders.
2. Preprocess the images by resizing and normalizing them.
3. Train the model using the provided code.
4. Evaluate the model on the test set.

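For classifying a single new image after training, a minimal sketch might look as follows; the weights path and image file name are hypothetical:

```python
import cv2
import numpy as np

model = build_model()                       # architecture sketch from above
model.load_weights("fold_0.weights.h5")     # assumed path to weights saved during training

img = cv2.imread("example.jpg")             # hypothetical input image
img = cv2.resize(img, (224, 224)).astype("float32") / 255.0

prob = float(model.predict(img[np.newaxis, ...])[0, 0])
label = "fake (AI-generated) art" if prob > 0.5 else "real art"
print(f"{label} (p_fake = {prob:.2f})")
```
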
*Visualization results*

- Confusion matrix: visualizes the classification performance.
- Training and validation metrics: plots of accuracy and loss over the epochs.
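
Both visualizations can be produced with matplotlib and scikit-learn; a sketch assuming the `y_test`, `y_pred`, and `history` names from the sketches above:

```python
import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay

# Confusion matrix for the test-set predictions.
ConfusionMatrixDisplay.from_predictions(
    y_test, y_pred, display_labels=["real art", "fake art"]
)
plt.show()

# Accuracy and loss curves from the Keras History object returned by model.fit().
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(history.history["accuracy"], label="train")
ax1.plot(history.history["val_accuracy"], label="validation")
ax1.set_title("Accuracy")
ax1.legend()
ax2.plot(history.history["loss"], label="train")
ax2.plot(history.history["val_loss"], label="validation")
ax2.set_title("Loss")
ax2.legend()
plt.show()
```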