Titanic: Machine Learning from Disaster (Kaggle)

Last Updated on May 3, 2021

About

  • Developed a machine learning model to predict the chances of survival of a passenger on the Titanic
  • Solved the problem using a Random Forest model with an accuracy of 92% (see the sketch below)
More Details: Titanic: Machine Learning from Disaster (Kaggle)
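
Below is a minimal sketch of this kind of approach, assuming Kaggle's train.csv and a handful of standard features; it is an illustration, not the exact notebook behind the reported score:

```python
# Rough Titanic sketch: basic feature prep plus a Random Forest classifier
# trained on Kaggle's train.csv (assumed to be in the working directory).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("train.csv")
df["Sex"] = df["Sex"].map({"male": 0, "female": 1})      # encode sex numerically
df["Age"] = df["Age"].fillna(df["Age"].median())          # fill missing ages

features = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare"]
X, y = df[features], df["Survived"]

model = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```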


Password Checker

Last Updated on May 3, 2021

About

This can be a very secure way to check whether your password has ever been compromised. It is a password checker that reports whether a given password has appeared in known breaches and, if so, how many times it has been found. This makes it easy to judge whether your password is strong enough to keep or too weak. Its working is pretty simple: in my terminal I run the Python file containing my code, checkmypass.py, followed by the password(s) to check; it checks as many passwords as are listed on the command line. I have used the Pwned Passwords API together with the SHA-1 algorithm to hash the given password into a complex digest that is hard to crack. For extra privacy, only the first five characters of the hashed password are sent, so the real password stays safe. This relies on the concept of k-anonymity, which provides privacy protection by guaranteeing that each record relates to at least k individuals, even if the released records are directly linked (or matched) to external information. I have added this to my GitHub repository.

password-checker/checkmypass.py at main · THC1111/password-checker (github.com)
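
A minimal sketch of the k-anonymity check described above, using the public Pwned Passwords range API; this is an illustration, not necessarily the exact code in the repository:

```python
# Sketch of the k-anonymity check: only the first 5 characters of the SHA-1
# hash are sent to the Pwned Passwords API; the full password never leaves the machine.
import hashlib
import sys

import requests


def pwned_count(password: str) -> int:
    """Return how many times `password` appears in known breaches (0 if never)."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}")
    resp.raise_for_status()
    # The API returns lines of "HASH_SUFFIX:COUNT" for every hash sharing this prefix.
    for line in resp.text.splitlines():
        hash_suffix, count = line.split(":")
        if hash_suffix == suffix:
            return int(count)
    return 0


if __name__ == "__main__":
    # Usage: python checkmypass.py password1 password2 ...
    for pw in sys.argv[1:]:
        count = pwned_count(pw)
        if count:
            print(f"{pw} was found {count} times... consider changing it.")
        else:
            print(f"{pw} was not found. Carry on!")
```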

This can be really effective for personal use.

More Details: PASSWORD CHECKER


Emotional Analysis Based Content Recommendation System

Last Updated on May 3, 2021

About

As the saying goes, “We are what we see”; the content we see can sometimes have an adverse effect on our behavior. Especially in a country like India, where numerous films and TV series are highly prominent, there are great chances of randomly watching explicit or disturbing content. This may have adverse effects on the behavior of people, especially children. And we also know “Prevention is better than cure”: preventing inappropriate content from going online can be more effective than banning it after release.

To achieve this, we aim to create a content filtering and recommendation system that either recommends a film or TV series or alerts a user with a warning message saying it is not recommended to watch. Netflix and other over-the-top (OTT) platforms perform a filtering process before they buy digital rights for any content. This is where our tool comes in handy: it detects disturbing or strongly emotion-inducing content with the help of human emotional responses. Through this project we aim to create a content detector based on human emotion recognition. We will project scenes to a test audience and capture their live emotions.

Then we use “Facebook DeepFace”, a pre-trained CNN-based face recognition and facial emotion analysis model, to identify faces and analyze their emotions. We use deep learning methods to recognize facial expressions and then make use of the Circumplex Model proposed by James Russell to classify emotions based on arousal and valence values. Based on the majority emotion projected by the audience, we would either recommend or not recommend the content for going on-air. This system prevents inappropriate content from going on-air.
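
A minimal sketch of the emotion-analysis step, assuming the open-source deepface Python library; the frame paths and the way results are aggregated are illustrative assumptions, not the project's exact pipeline:

```python
# Rough sketch: analyze captured audience frames with the `deepface` library
# and tally the dominant emotion per detected face.
from collections import Counter

from deepface import DeepFace


def tally_audience_emotions(frame_paths):
    """Count the dominant emotion of every face found in the given frames."""
    counts = Counter()
    for path in frame_paths:
        # Recent deepface versions return a list with one result per detected face.
        results = DeepFace.analyze(img_path=path, actions=["emotion"],
                                   enforce_detection=False)
        if isinstance(results, dict):   # older versions return a single dict
            results = [results]
        for face in results:
            counts[face["dominant_emotion"]] += 1
    return counts


# Hypothetical usage: frames captured from the test-audience camera.
# counts = tally_audience_emotions(["frame_001.jpg", "frame_002.jpg"])
# majority_emotion = counts.most_common(1)[0][0] if counts else None
```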

More Details: Emotional Analysis Based content recommendation system


Dimensionality Reduction

Last Updated on May 3, 2021

About

What is dimensionality reduction?

Dimensionality reduction, or dimension reduction, is the transformation of data from a high-dimensional space into a low-dimensional space so that the low-dimensional representation retains some meaningful properties of the original data, ideally close to its intrinsic dimension.

Here are some of the benefits of applying dimensionality reduction to a dataset:

  • The space required to store the data is reduced as the number of dimensions comes down.
  • Fewer dimensions lead to less computation/training time.
  • Some algorithms do not perform well when the data has a large number of dimensions.

Dimensionality reduction refers to techniques for reducing the number of input variables in training data. When dealing with high dimensional data, it is often useful to reduce the dimensionality by projecting the data to a lower dimensional subspace which captures the “essence” of the data.


TYPES:

Principal Component Analysis (PCA)

PCA is a technique from linear algebra that can be used to automatically perform dimensionality reduction.
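
For illustration, a minimal scikit-learn sketch of PCA as a dimensionality reduction step (the data here is random and purely for demonstration):

```python
# PCA sketch: project 10-dimensional data onto its top 2 principal components.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))          # 100 samples, 10 features

pca = PCA(n_components=2)               # keep the top 2 principal components
X_reduced = pca.fit_transform(X)
print(X_reduced.shape)                  # (100, 2)
print(pca.explained_variance_ratio_)    # variance explained by each component
```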



Linear Discriminant Analysis (LDA)

Linear Discriminant Analysis, or LDA for short, is a predictive modeling algorithm for multi-class classification. It can also be used as a dimensionality reduction technique, providing a projection of a training dataset that best separates the examples by their assigned class.
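
A corresponding sketch for LDA; unlike PCA it is supervised, so it needs class labels and yields at most (number of classes − 1) components (the data and labels below are illustrative):

```python
# LDA sketch: supervised projection that best separates the classes.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 10))
y = rng.integers(0, 3, size=150)        # 3 classes -> at most 2 LDA components

lda = LinearDiscriminantAnalysis(n_components=2)
X_reduced = lda.fit_transform(X, y)     # note: the labels y are required
print(X_reduced.shape)                  # (150, 2)
```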


Kernel PCA (KPCA)

PCA linearly transforms the original inputs into new uncorrelated features. KPCA is a nonlinear extension of PCA: as the name suggests, the kernel trick is used to make KPCA nonlinear.
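
A short sketch of kernel PCA with an RBF kernel on a toy nonlinear dataset (the kernel choice and gamma value are illustrative):

```python
# Kernel PCA sketch: the RBF kernel lets the projection capture nonlinear
# structure (concentric circles) that plain linear PCA would miss.
from sklearn.datasets import make_circles
from sklearn.decomposition import KernelPCA

X, y = make_circles(n_samples=200, factor=0.3, noise=0.05, random_state=0)

kpca = KernelPCA(n_components=2, kernel="rbf", gamma=10)
X_kpca = kpca.fit_transform(X)          # circles become (nearly) linearly separable
print(X_kpca.shape)                     # (200, 2)
```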


Problem description:

The dataset is taken from the UCI ML repository. It is a wine dataset where each row describes a different wine with 13 features plus the customer segment: Alcohol, Malic_Acid, Ash, Ash_Alcanity, Magnesium, Total_Phenols, Flavanoids, Nonflavanoid_Phenols, Proanthocyanins, Color_Intensity, Hue, OD280, Proline, and Customer_Segment.


It is a business case study. I have to apply clustering to identify diverse segments of customers grouped by similar wine preferences, of which there are 3 categories. Then, for the owner of this wine shop, I have to build a predictive model trained on this data, so that for each new wine the owner stocks we can deploy the predictive model on the reduced dimensions and predict which customer segment the new wine belongs to. Finally, we can recommend the right wine to the right customer to optimise sales and profit.
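
A minimal sketch of this workflow, assuming scikit-learn's built-in copy of the UCI wine dataset; the choice of LDA for the reduction and logistic regression for the classifier is illustrative, not necessarily what the original project used:

```python
# Illustrative end-to-end sketch: reduce the 13 wine features to 2 dimensions
# with LDA, then train a classifier to predict the customer segment.
from sklearn.datasets import load_wine
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)                   # 178 wines, 13 features, 3 segments
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

lda = LinearDiscriminantAnalysis(n_components=2).fit(X_train, y_train)
X_train_2d, X_test_2d = lda.transform(X_train), lda.transform(X_test)

clf = LogisticRegression().fit(X_train_2d, y_train)
print("Test accuracy on 2 LDA components:", clf.score(X_test_2d, y_test))

# For a new wine, apply the same scaler and LDA projection, then predict:
# segment = clf.predict(lda.transform(scaler.transform(new_wine_features)))
```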


DATASET:

RESULTS OF ALL 3 METHODS:

Principal Component Analysis

Linear Discriminant Analysis

Kernel PCA

More Details: DIMENSIONALITY REDUCTION


Hyderabad House Price Predictor

Last Updated on May 3, 2021

About


An ML model which predicts the price of a house based on features like total sq. ft. area, total number of bedrooms, balconies, etc.

The front-end of this model is made with Bootstrap and Flask, whereas the backend is a machine learning model trained on the housing-price dataset; the algorithm used is Random Forest.

The model is hosted at: https://homepricepredictor.herokuapp.com/
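
A hypothetical sketch of how such a Flask backend might serve the trained Random Forest model; the model file name, form fields, and route below are illustrative assumptions, not the project's actual code:

```python
# Hypothetical serving layer: a Flask route that loads a trained Random Forest
# regressor and returns a predicted price for the submitted house features.
import pickle

from flask import Flask, render_template, request

app = Flask(__name__)

# Assumed artifact name for the Random Forest model trained on the housing dataset.
with open("house_price_model.pkl", "rb") as f:
    model = pickle.load(f)


@app.route("/", methods=["GET", "POST"])
def predict():
    price = None
    if request.method == "POST":
        # Illustrative form fields; the real app's fields may differ.
        features = [[
            float(request.form["total_sqft"]),
            int(request.form["bedrooms"]),
            int(request.form["balconies"]),
        ]]
        price = model.predict(features)[0]
    return render_template("index.html", price=price)


if __name__ == "__main__":
    app.run(debug=True)
```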



General Overview of the Project 


Starting off with the home page, which is designed using Bootstrap classes: in this template the general overview of the project is mentioned, along with the parameters required for predicting the price of the house. Here's a glimpse of it.