Word Search Game
Last Updated on May 3, 2021
This is a fun and very simple game. The aim is to guess the hidden word within the given number of trials.
If you fail to guess the word within the allotted trials, you lose the game; if you guess it in time, you win.
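The win/lose rule described above can be sketched in a few lines of Python. This is an illustrative assumption of how the game loop might work (the trial limit and the guess-checking logic are not specified in the original description):

```python
# Minimal sketch of the game logic: the player must guess the hidden word
# within a fixed number of trials. `max_trials` is an assumed default.
def play(secret, guesses, max_trials=5):
    """Return 'win' if `secret` is guessed within `max_trials`, else 'lose'."""
    for guess in guesses[:max_trials]:
        if guess == secret:
            return "win"
    return "lose"
```

For example, `play("apple", ["pear", "apple"])` wins on the second trial, while a player who uses up all trials without a match loses.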
Emotional Analysis Based Content Recommendation System
As the saying goes, "We are what we see": the content we watch can sometimes have an adverse effect on our behavior. Especially in a country like India, where films and TV series are highly prominent, there is a real chance of encountering explicit or disturbing content, which can harm viewers' behavior, especially children's. We also know that "prevention is better than cure": keeping inappropriate content from going online can be more effective than banning it after release.
To achieve this, we aim to create a content filtering and recommendation system that either recommends a film or TV series or alerts the user with a warning that it is not recommended for viewing. Netflix and other over-the-top (OTT) platforms perform a filtering process before they buy digital rights for any content, and this is where our tool comes in handy. It detects disturbing or strongly emotion-inducing content with the help of human emotional responses: through this project we aim to build a content detector based on human emotion recognition, projecting scenes to a test audience and capturing their live emotions.
Then we use Facebook's "DeepFace", a pre-trained CNN-based face recognition and facial emotion analysis model, to identify faces and analyze their emotions. We use deep learning methods to recognize facial expressions and then apply the Circumplex Model proposed by James Russell to classify emotions by their arousal and valence values. Based on the majority emotion expressed by the audience, we either recommend or do not recommend the content for going on-air. This system prevents inappropriate content from going on-air.
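The final recommend/do-not-recommend decision could be sketched as follows. This is a hedged illustration, not the project's actual code: it assumes per-viewer emotion labels have already been produced by a model such as DeepFace, and the valence signs assigned to each label are assumptions loosely inspired by Russell's circumplex model (negative valence = unpleasant emotion):

```python
# Sketch of the majority-emotion decision step. The VALENCE mapping is an
# illustrative assumption: +1 pleasant, 0 neutral, -1 unpleasant.
from collections import Counter

VALENCE = {
    "happy": 1, "surprise": 0, "neutral": 0,
    "sad": -1, "angry": -1, "fear": -1, "disgust": -1,
}

def recommend(emotions, threshold=0.5):
    """Recommend the content unless at least `threshold` of the audience
    showed negative-valence emotions."""
    if not emotions:
        return True  # no evidence against the content
    negative = sum(1 for e in emotions if VALENCE.get(e, 0) < 0)
    return negative / len(emotions) < threshold

def majority_emotion(emotions):
    """Most frequent emotion label across all captured viewers/frames."""
    return Counter(emotions).most_common(1)[0][0]
```

With mostly pleasant reactions (`["happy", "happy", "sad"]`) the content is recommended; with mostly negative reactions (`["fear", "disgust", "angry", "happy"]`) it is flagged.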
Telecom Churn Prediction
This case requires trainees to develop a model for predicting customer churn at a fictitious wireless telecom company, and to use insights from the model to develop an incentive plan for enticing would-be churners to remain with the company. Data for the case are available in CSV format; they are a scaled-down version of the full database generously donated by an anonymous wireless telephone company. There are 7043 customers in the database and 20 potential predictors, and candidates can use whatever method they wish to develop their machine learning model. The data come in a single file with 7043 rows that combines a "calibration" set of 4000 customers and a "validation" set of 3043 customers. Each set contains (1) a "churn" variable signifying whether the customer had left the company two months after observation, and (2) the 20 potential predictor variables that could be used in a predictive churn model. Following the usual model-development procedure, the model is estimated on the calibration data and tested on the validation data. This case requires both statistical analysis and creativity/judgment; I recommend you spend ample time both fine-tuning your machine learning model and interpreting its results.
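The calibration/validation split described above can be sketched with the standard library alone. The row contents below are dummy stand-ins (the real predictor columns are not reproduced here); only the 4000/3043 split sizes come from the case description:

```python
# Sketch of splitting the combined 7043-row file into the 4000-customer
# calibration set and the 3043-customer validation set. A fixed seed keeps
# the split reproducible.
import random

def split_customers(rows, calibration_size=4000, seed=0):
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    return rows[:calibration_size], rows[calibration_size:]

# Dummy rows standing in for the 7043 customers:
dummy = [{"id": i, "churn": i % 2} for i in range(7043)]
calibration, validation = split_customers(dummy)
```

The model would then be fitted on `calibration` and evaluated once on `validation`.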
My Rewards - Alexa Skill
An Alexa skill to reward kids for good behavior.
After building http://eFamilyBoard.com I decided to purchase an Alexa Show (2nd gen) for comparison. eFamilyBoard has a few nicer features, but overall the Alexa is much more powerful and scalable. One heavily used feature on eFamilyBoard was the sticker board, which had no comparable skill on Alexa. As a result, I decided to build the My Rewards skill to replace it.
What it does
It allows families and teachers to reward kids for good behavior. The user ultimately decides what to do with the rewards; personally, our kids earn $5 after they've earned 10 total rewards, then they start over. The user can add recipients and give multiple rewards at a time, for example, "Alexa, give John 5 stickers" or "Alexa, take away 2 stickers from John". And if you don't know what type of reward to give or take away, you can simply say "rewards" in place of the reward type (e.g. football, sticker, heart, unicorn, truck, cookie, doughnut, etc.).
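The reward-tracking logic behind those voice commands might look like the sketch below. This is a hedged illustration only: the real skill stores this state in DynamoDB via the Alexa SDK, and the `$5`-per-10-rewards rule mirrors the example above while the class and method names are assumptions:

```python
# Pure-Python sketch of the reward tally. Payout rule assumed from the text:
# $5 once a recipient reaches 10 total rewards, then the counter resets.
class RewardBoard:
    def __init__(self, payout_threshold=10, payout_amount=5):
        self.totals = {}
        self.payout_threshold = payout_threshold
        self.payout_amount = payout_amount

    def give(self, recipient, count=1):
        """'Alexa, give John 5 stickers'"""
        self.totals[recipient] = self.totals.get(recipient, 0) + count
        if self.totals[recipient] >= self.payout_threshold:
            self.totals[recipient] -= self.payout_threshold
            return self.payout_amount  # dollars earned; counter starts over
        return 0

    def take_away(self, recipient, count=1):
        """'Alexa, take away 2 stickers from John'"""
        self.totals[recipient] = max(0, self.totals.get(recipient, 0) - count)
```

Giving John 5 stickers twice earns the $5 payout and resets his counter to zero.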
How I built it
I built it with the ASK CLI and Visual Studio Code. I started with the sample hello world app and refactored it to utilize typescript, express, and ngrok to run locally. I also used mocha with chai for unit tests that run and must pass before I can deploy to AWS.
Challenges I ran into
I learned to get started by taking a course on Udemy, but the course didn't use TypeScript and deployed to AWS for every change. That would take FOREVER to debug and build efficiently. Instead, I set up a simple Express app and use ngrok to route calls to my local machine. This allows me to talk to my Alexa and debug by stepping through the code in VS Code.
Accomplishments that I'm proud of
Project setup, local debugging with TypeScript, and tests with 90%+ code coverage. Not only does it work for voice, but it also supports display templates to show the user what rewards each participant has earned. I was going to add in-skill purchasing (ISP) down the road but decided to do it from the start, and it ended up being easier than expected. For my first Alexa app, I think it works extremely well, and my kids started using it with no learning curve.
What I learned
Being my first Alexa app, I learned a TON, from how to simply use Alexa (I'm still learning tricks) to how to design voice interactions. I'd also never used DynamoDB, but the Alexa SDK made that super easy.
What's next for My Rewards
Add support for more languages. My family has been using it during development, but I'm excited to see what others think of it.
Finding Donors for Charity ML
In this project, you will employ several supervised algorithms of your choice to accurately model individuals' income using data collected from the 1994 U.S. Census. You will then choose the best candidate algorithm from preliminary results and further optimize it to best model the data. Your goal is to construct a model that accurately predicts whether an individual makes more than $50,000. This sort of task can arise in a non-profit setting, where organizations survive on donations. Understanding an individual's income can help a non-profit better understand how large of a donation to request, or whether or not they should reach out to begin with. While it can be difficult to determine an individual's general income bracket directly from public sources, we can (as we will see) infer this value from other publicly available features.
The dataset for this project originates from the UCI Machine Learning Repository. The dataset was donated by Ron Kohavi and Barry Becker after being published in the article "Scaling Up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid"; you can find the article by Ron Kohavi online. The data we investigate here consist of small changes to the original dataset, such as removing the 'fnlwgt' feature and records with missing or ill-formatted entries.
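A common first step in this kind of project is to measure a naive baseline before trying any learning algorithm. The sketch below shows one such baseline, a model that predicts ">50K" for everyone; the class counts used are dummy values for illustration, not the real dataset's counts:

```python
# Hedged sketch of a naive baseline for the census income task: always
# predict ">50K" and measure accuracy. Any trained model should beat this.
def naive_accuracy(labels, positive=">50K"):
    """Accuracy of a predictor that outputs `positive` for every individual."""
    return sum(1 for y in labels if y == positive) / len(labels)

dummy_labels = [">50K"] * 25 + ["<=50K"] * 75
baseline = naive_accuracy(dummy_labels)  # 0.25 on this toy class balance
```

On an imbalanced dataset like this one, accuracy alone can be misleading, which is why precision-oriented metrics are often used alongside it.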
Dimensionality Reduction
What is dimensionality reduction?
Dimensionality reduction, or dimension reduction, is the transformation of data from a high-dimensional space into a low-dimensional space so that the low-dimensional representation retains some meaningful properties of the original data, ideally close to its intrinsic dimension.
Here are some of the benefits of applying dimensionality reduction to a dataset: the space required to store the data is reduced as the number of dimensions comes down; fewer dimensions lead to less computation/training time; and some algorithms do not perform well when the data has a large number of dimensions.
Dimensionality reduction refers to techniques for reducing the number of input variables in training data. When dealing with high dimensional data, it is often useful to reduce the dimensionality by projecting the data to a lower dimensional subspace which captures the “essence” of the data.
Principal Component Analysis (PCA)
PCA is a technique from linear algebra that can be used to automatically perform dimensionality reduction.
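The idea behind PCA can be illustrated with a minimal pure-Python sketch (a real project would use a library implementation such as scikit-learn's `PCA`). This toy version assumes 2-D points, finds the first principal component, the direction of maximum variance, by power iteration on the covariance matrix, and projects each point onto it:

```python
# Toy PCA for 2-D data: first principal component via power iteration.
def first_component(points, iters=200):
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    centered = [(x - mx, y - my) for x, y in points]
    # Entries of the 2x2 covariance matrix
    cxx = sum(x * x for x, _ in centered) / n
    cyy = sum(y * y for _, y in centered) / n
    cxy = sum(x * y for x, y in centered) / n
    v = (1.0, 1.0)  # initial guess for the dominant eigenvector
    for _ in range(iters):
        w = (cxx * v[0] + cxy * v[1], cxy * v[0] + cyy * v[1])
        norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = (w[0] / norm, w[1] / norm)
    return v

def project(points, v):
    """1-D coordinates of each point along the component direction v."""
    return [x * v[0] + y * v[1] for x, y in points]
```

For points lying on the line y = 2x, the recovered component is (1, 2) normalized, so projecting onto it reduces the data from 2-D to 1-D with no loss.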
Linear Discriminant Analysis (LDA)
Linear Discriminant Analysis, or LDA for short, is a predictive modeling algorithm for multi-class classification. It can also be used as a dimensionality reduction technique, providing a projection of a training dataset that best separates the examples by their assigned class.
Kernel PCA (KPCA)
PCA linearly transforms the original inputs into new uncorrelated features. KPCA is a nonlinear extension of PCA: as the name suggests, the kernel trick is used to make it nonlinear.
The dataset is taken from the UCI ML repository. Each row describes a different wine with 13 features: Alcohol, Malic_Acid, Ash, Ash_Alcanity, Magnesium, Total_Phenols, Flavanoids, Nonflavanoid_Phenols, Proanthocyanins, Color_Intensity, Hue, OD280, and Proline, plus a Customer_Segment label.
It is a business case study. I have to apply clustering to identify diverse customer segments grouped by similar wine preferences; there are 3 categories. For the owner of this wine shop, I then have to build a predictive model, trained on this data with dimensionality reduction applied, so that for each new wine the owner stocks we can predict which customer segment it belongs to. Finally, we can recommend the right wine to the right customer to optimise sales and profit.
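The prediction step described above, assigning a new wine to a segment after dimensionality reduction, could be sketched as a nearest-centroid classifier. This is an illustrative assumption about the approach, not the project's actual code, and the 1-D "reduced" features and segment labels below are toy values:

```python
# Sketch: classify a new wine by the nearest segment centroid in the
# reduced (here 1-D) feature space produced by PCA/LDA/KPCA.
def centroids(reduced, labels):
    """Mean reduced-feature value per customer segment."""
    sums, counts = {}, {}
    for x, seg in zip(reduced, labels):
        sums[seg] = sums.get(seg, 0.0) + x
        counts[seg] = counts.get(seg, 0) + 1
    return {seg: sums[seg] / counts[seg] for seg in sums}

def predict_segment(x, cents):
    """Segment whose centroid is closest to the new wine's reduced feature."""
    return min(cents, key=lambda seg: abs(x - cents[seg]))

cents = centroids([0.1, 0.2, 2.0, 2.1, 4.0], [1, 1, 2, 2, 3])
segment = predict_segment(1.9, cents)  # nearest centroid wins
```

A new wine with reduced feature 1.9 falls nearest the centroid of segment 2, so that segment's customers would be recommended the wine.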
Results of all three methods: Principal Component Analysis, Linear Discriminant Analysis, and Kernel PCA. [Result figures not included.]