CRUD Implemented Using Django and AJAX

Last Updated on May 3, 2021

About

This project implements CRUD (create, read, update, delete) operations using Django and AJAX, so the page can modify records without a full reload.
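The project's actual Django views are not shown here, but the four operations can be sketched framework-free. The `TaskStore` class and its field names below are illustrative assumptions, not the project's code; in the real app each method would be a Django view returning a `JsonResponse`, called from the page via AJAX.

```python
# Minimal in-memory sketch of the four CRUD operations that the
# Django views would expose as JSON endpoints (names are illustrative).

class TaskStore:
    """Stands in for a Django model plus its view layer in this sketch."""

    def __init__(self):
        self._rows = {}
        self._next_id = 1

    def create(self, title):
        row = {"id": self._next_id, "title": title}
        self._rows[self._next_id] = row
        self._next_id += 1
        return row

    def read(self, pk):
        return self._rows.get(pk)

    def update(self, pk, title):
        if pk in self._rows:
            self._rows[pk]["title"] = title
            return self._rows[pk]
        return None

    def delete(self, pk):
        return self._rows.pop(pk, None) is not None
```

On the page, each of these would be triggered by an AJAX request, so the table refreshes in place instead of reloading.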

More Details: CRUD implemented using Django and AJAX

Submitted By



Image Classification Using Machine Learning

Last Updated on May 3, 2021

About

This is a prototype that predicts which of the considered categories a given image belongs to. Any set of images can be used to train the classifier to tell the classes apart.

In this prototype I downloaded images of three dog breeds: Doberman, Golden Retriever, and Shih Tzu. The first step is to preprocess the data, which means converting each image into a NumPy array; this process is called flattening the image. The flattened array serves as the input to the model.
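The flattening step above can be sketched with NumPy alone; the random array here stands in for one loaded photo, and the 150x150 size is an assumed target, not the project's actual resolution.

```python
import numpy as np

# A stand-in for one loaded RGB image (height x width x channels).
image = np.random.rand(150, 150, 3)

# "Flattening" turns the 3-D pixel grid into a single 1-D feature vector,
# which is the input format the classifier expects.
flat = image.flatten()

print(flat.shape)  # one row of 150 * 150 * 3 = 67500 features
```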


After preprocessing the data, the next step is to search for the best-suited parameters for the machine learning algorithm. After getting the parameters, I passed them into the algorithm as arguments and fit the model. From sklearn, classification_report, accuracy_score, and confusion_matrix give a brief understanding of the model's performance. The trained model can be saved to a file using the pickle library.
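The text does not name the estimator or the parameter grid, so this sketch assumes an SVM tuned with scikit-learn's GridSearchCV, with the digits dataset standing in for the flattened dog-breed images (each row is already a flattened image, matching the step above).

```python
import pickle

from sklearn.datasets import load_digits
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Digits stand in for the flattened dog-breed images.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Check the best-suited parameters, then fit the model with them.
grid = GridSearchCV(SVC(), {"C": [1, 10], "gamma": ["scale"]}, cv=3)
grid.fit(X_train, y_train)

# The three sklearn helpers named above summarise the model's performance.
pred = grid.predict(X_test)
print(classification_report(y_test, pred))
print(confusion_matrix(y_test, pred))
print("accuracy:", accuracy_score(y_test, pred))

# Persist the fitted model with pickle (here to bytes; a file works the same way).
blob = pickle.dumps(grid.best_estimator_)
model = pickle.loads(blob)  # reload later for prediction
```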


The last step is prediction. For this I added an input field that takes a URL pointing to an image of the dog to classify. The image is flattened into a NumPy array in the same way, and the model predicts its class. The output shows the predicted breed along with the image being checked.


The aim of this project is to train the computer to distinguish between the different classes considered.

More Details: Image Classification Using Machine Learning

Submitted By


Breast Cancer Analysis and Prediction Using ML

Last Updated on May 3, 2021

About

Project EDA-

Exploratory data analysis was done using the Pandas Profiling module.


Data Set Information:

Features are computed from a digitized image of a fine needle aspirate (FNA) of a breast mass. They describe characteristics of the cell nuclei present in the image. The separating plane in the 3-dimensional space is that described in: [K. P. Bennett and O. L. Mangasarian: "Robust Linear Programming Discrimination of Two Linearly Inseparable Sets", Optimization Methods and Software 1, 1992, 23-34].

This database is also available through the UW CS FTP server: ftp ftp.cs.wisc.edu, then cd math-prog/cpo-dataset/machine-learn/WDBC/

Also can be found on UCI Machine Learning Repository: https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29


Attribute Information:

  1. ID number
  2. Diagnosis (M = malignant, B = benign)
  3-32. Ten real-valued features are computed for each cell nucleus:

  a) radius (mean of distances from center to points on the perimeter)
  b) texture (standard deviation of gray-scale values)
  c) perimeter
  d) area
  e) smoothness (local variation in radius lengths)
  f) compactness (perimeter^2 / area - 1.0)
  g) concavity (severity of concave portions of the contour)
  h) concave points (number of concave portions of the contour)
  i) symmetry
  j) fractal dimension ("coastline approximation" - 1)

The mean, standard error and "worst" or largest (mean of the three largest values) of these features were computed for each image, resulting in 30 features. For instance, field 3 is Mean Radius, field 13 is Radius SE, field 23 is Worst Radius.

All feature values are recorded with four significant digits.

Missing attribute values: none

Class distribution: 357 benign, 212 malignant
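This dataset also ships with scikit-learn, so the class distribution stated above can be checked directly:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer

data = load_breast_cancer()

# In scikit-learn's encoding of this dataset, target 0 is malignant
# and target 1 is benign.
malignant = int(np.sum(data.target == 0))
benign = int(np.sum(data.target == 1))

print(benign, "benign /", malignant, "malignant")  # 357 benign / 212 malignant
```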

More Details: Breast Cancer Analysis and Prediction Using ML

Submitted By


Dog and Cat Image Classification

Last Updated on May 3, 2021

About


The project classifies an image as either a dog or a cat. The model is built with a Convolutional Neural Network (CNN), a deep learning architecture for analysing images that is widely used for image recognition and classification. The project was developed in Python, an interpreted, high-level, general-purpose programming language, and implemented in a Jupyter Notebook.

Libraries and Functions used-

Various Python libraries were used while developing the ML model. The tools used were:

1. tensorflow- It is used to build and train the neural network

2. load_model- This Keras function loads a saved model and reconstructs it identically

3. tkinter- It is Python's standard GUI toolkit

4. PIL- It is the Python Imaging Library, which supports operations on images

5. filedialog- It is a tkinter module used for selecting a file or directory

6. playsound- It is used for playing audio

7. ImageDataGenerator- It is a class of the Keras library used for real-time data augmentation

8. flow_from_directory- It is an ImageDataGenerator method that reads and augments images straight from a directory

9. keras preprocessing- It is the data preprocessing module of Keras, which provides utilities for working with image data

10. load_img- It loads an image in PIL format

11. img_to_array- It converts a PIL image into a NumPy array

12. expand_dims- It adds an extra dimension (axis=0) for a batch containing only one image
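The last three utilities (load_img, img_to_array, expand_dims) boil down to a short array pipeline, sketched here with NumPy alone; the 64x64 size and the 1/255 rescaling are assumptions standing in for the project's actual input settings.

```python
import numpy as np

# Stand-in for load_img + img_to_array: a 64x64 RGB image as a float array.
image = np.random.randint(0, 256, size=(64, 64, 3)).astype("float32")

# Scale pixel values to [0, 1], as ImageDataGenerator's rescale=1./255 would.
image /= 255.0

# expand_dims adds a leading batch axis so the model sees a batch of one image.
batch = np.expand_dims(image, axis=0)

print(batch.shape)  # (1, 64, 64, 3)
```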

In this neural network 2 activation functions were used-

1. ReLu

2. Sigmoid

The methods followed were:

1.     Pre-processing of data

1.1  Training data

1.2  Testing data

2.     Building CNN

2.1  Adding the first convolution layer

2.2  Pooling

2.3  Adding the second convolution layer

2.4  Flattening

2.5  Full connection

2.6  Output layer
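Steps 2.1 through 2.6 can be sketched as a plain-NumPy forward pass. The filter counts, sizes, and the 28x28 input are illustrative assumptions; in the project itself these would be Keras Conv2D, MaxPooling2D, Flatten, and Dense layers, with ReLU in the hidden layers and sigmoid at the output as listed above.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, kernels):
    """'Valid' convolution of x (H, W, C_in) with kernels (N, kh, kw, C_in), then ReLU."""
    n, kh, kw, _ = kernels.shape
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((h, w, n))
    for k in range(n):
        for i in range(h):
            for j in range(w):
                out[i, j, k] = np.sum(x[i:i + kh, j:j + kw, :] * kernels[k])
    return np.maximum(out, 0.0)  # ReLU activation

def max_pool(x, size=2):
    """2x2 max pooling over each feature map (odd edges are truncated)."""
    h, w, c = x.shape
    h2, w2 = h // size, w // size
    return x[:h2 * size, :w2 * size].reshape(h2, size, w2, size, c).max(axis=(1, 3))

image = rng.random((28, 28, 1))                # grayscale input image
k1 = rng.standard_normal((4, 3, 3, 1)) * 0.1   # 2.1: first conv layer, 4 filters
k2 = rng.standard_normal((8, 3, 3, 4)) * 0.1   # 2.3: second conv layer, 8 filters

x = max_pool(conv2d(image, k1))                # 2.1 convolution + 2.2 pooling
x = max_pool(conv2d(x, k2))                    # 2.3 convolution + pooling again
flat = x.flatten()                             # 2.4 flattening -> 5*5*8 = 200 values

w = rng.standard_normal(flat.size) * 0.1       # 2.5 full connection (one unit here)
p = 1.0 / (1.0 + np.exp(-(flat @ w)))          # 2.6 output layer: sigmoid probability
print("sigmoid output:", p)                    # a value in (0, 1)
```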

The accuracy at the last (50th) epoch was 97%.

Prediction Function

This function loads the ML model, takes the image input given by the user, and pre-processes it. The pre-processed image then goes as input to the model, which returns the prediction. As output, the code plays a sound corresponding to the prediction.

Model

The final page asks the user to select an image from the local computer. The window's title is 'Image Classifier'.

Once the user selects the image, the model predicts whether the image is of a dog or a cat, and plays a sound announcing the prediction.

More Details: Dog and Cat Image classification

Submitted By


Social Distance Monitoring System (Python and OpenCV)

Last Updated on May 3, 2021

About

Social distancing is one of the community mitigation measures recommended during the COVID-19 pandemic. It can reduce virus transmission by increasing physical distance or reducing the frequency of congregation in socially dense community settings, such as ATMs, airports, or marketplaces.

The COVID-19 pandemic has demonstrated that we cannot expect to geographically contain the next influenza pandemic in the location where it emerges, nor can we expect to prevent international spread of infection for more than a short period. Vaccines are not expected to be available during the early stage of the next pandemic (1). Therefore, we came up with this system to limit the spread of COVID-19 by ensuring social distancing among people. It uses a CCTV camera feed to identify social distancing violations.

We first apply object detection using a YOLOv3 model trained on the COCO dataset, which has 80 classes. YOLO uses the Darknet framework to process the incoming feed frame by frame. It returns detections with their IDs, centroids, corner coordinates, and confidences as multidimensional ndarrays. We keep only the detections whose class is "person" and draw bounding boxes to highlight them in each frame. We then use the centroids to compute the Euclidean distance between people in pixels. If the distance between two centroids is less than the configured value, the system throws an alert with a beeping sound and turns the violators' bounding boxes red.
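The distance check described above reduces to a pairwise Euclidean computation over the person centroids. The pixel threshold and the sample centroids below are made-up values for illustration, not the project's configuration.

```python
import numpy as np

MIN_DISTANCE = 50  # configured violation threshold in pixels (assumed value)

# Centroids of the "person" detections in one frame, as (x, y) pixel coordinates.
centroids = np.array([[100, 200], [130, 200], [400, 420]], dtype=float)

# Pairwise Euclidean distances between every pair of people.
diff = centroids[:, None, :] - centroids[None, :, :]
dist = np.sqrt((diff ** 2).sum(axis=-1))

# Any pair closer than the threshold is flagged; their boxes would turn red.
violators = set()
n = len(centroids)
for i in range(n):
    for j in range(i + 1, n):
        if dist[i, j] < MIN_DISTANCE:
            violators.update((i, j))

print("violating detections:", sorted(violators))  # [0, 1]
```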



More Details: Social Distance Monitoring System (Python and OpenCV)

Submitted By


Human-Computer Interaction Using Iris, Head and Eye Detection

Last Updated on May 3, 2021

About

HCI stands for human-computer interaction, which means the interaction between humans and computers.

We need to improve it because better interaction improves usability: a rich design encourages users, while a poor design keeps them at bay.

We also need to design for different categories of people, of different ages, colors, and genders, and make interfaces accessible to older people.

It is our moral responsibility to make it accessible to disabled people.

So this project tracks the head, eye, and iris to detect eye movement using the Viola-Jones algorithm. However, this algorithm does not work with masks on, since it relies on facial features to calculate distances.

It uses the Euclidean distance between feature positions in the previous frame and the next frame, and plots this distance on a graph.

It also uses the formula theta = tan^-1(b/a) to calculate the deviation.
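The deviation formula above is a one-line computation; the component values a and b below are made-up sample displacements, whereas in the project they would come from the tracked feature points.

```python
import math

# Horizontal (a) and vertical (b) displacement components between two frames
# (sample values; in the project these come from the tracked positions).
a, b = 3.0, 3.0

theta = math.atan(b / a)    # theta = tan^-1(b / a)
print(math.degrees(theta))  # 45.0 degrees when the components are equal
```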

Here we use an ANN because ANNs can work with incomplete data. Specifically, we use a constructive (generative) neural network, which begins by capturing individual images of the user to create individual patterns and track the eye.

We build the neural network and train it to predict the gaze.

Finally, the prediction is converted into mouse movement, clicks, and double clicks on icons and the virtual keyboard.

As contributing and moral individuals, it is our duty to make devices compatible with all age groups and with differently abled persons.

More Details: Human-Computer Interaction Using Iris, Head and Eye Detection

Submitted By