Performing Analysis of Meteorological Data
Last Updated on May 3, 2021
The principal objective of this project is to transform the raw data into useful, easily understandable data that is ready for further manipulation. Since weather data is among the most easily available data on the internet, it serves as a great starting point for understanding fundamental data analytics concepts.
Hypothesis of this analysis: "Do the apparent temperature and humidity, compared monthly across 10 years of data, indicate an increase due to global warming?"
To transform the raw data from the weather dataset into useful data for analysis.
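The monthly comparison behind the hypothesis can be sketched with pandas. The column names and the synthetic stand-in data below are assumptions for illustration, not the actual dataset's schema:

```python
# Minimal sketch: resample to monthly averages, then compare one calendar
# month across the ten years. The columns and values are synthetic
# stand-ins, not the real weather dataset.
import numpy as np
import pandas as pd

rng = pd.date_range("2006-01-01", "2015-12-31", freq="D")
df = pd.DataFrame({
    "apparent_temperature": 10 + 10 * np.sin(2 * np.pi * rng.dayofyear / 365),
    "humidity": 0.7 + 0.1 * np.cos(2 * np.pi * rng.dayofyear / 365),
}, index=rng)

# Monthly means, then pick out (say) every April to compare across years.
monthly = df.resample("MS").mean()
april = monthly[monthly.index.month == 4]
print(april["apparent_temperature"].round(2))
```

With the real dataset, a trend (or its absence) in these per-month series across the ten years is what would support or reject the hypothesis.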
Machine Learning (Heart Disease Prediction Model)
This is a web-based API model which predicts the probability of having heart disease.
Here I had a dataset of a few patients with information such as CRF, hypothyroidism, HT, and DM.
I split the data so that I could train and then test our predictions, finding out the accuracy using various algorithms in Python.
The Python libraries used here are NumPy, Matplotlib, pandas, scikit-learn, and pickle.
I preprocessed the data and tried various splitting options. I examined various plots using Matplotlib, and used NumPy and pandas to read the data and look at various statistics.
I used various algorithms, namely:
Random Forest (model file on GitHub as modelRF.py)
Decision Tree (modelDT.py)
Naive Bayes (modelNB.py)
For each algorithm I fitted the model on my training data, saved the model to disk, loaded it back using the pickle library, and finally compared the results.
The accuracy was computed for each algorithm, and all of them showed accuracy greater than 85%.
All of this model building was done in the model.py files (modelNB for Naive Bayes, modelSVM for Support Vector Machine, etc.), one per algorithm.
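The split, fit, save/load, and score workflow described above can be sketched with scikit-learn. The patient data here is synthetic stand-in data, and the pickle round trip is done in memory for brevity (the project saves model files to disk):

```python
# Sketch of the workflow: split, fit each model, pickle it, load it back,
# and compare accuracies. The data is synthetic, not the real patient set.
import pickle
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))            # stand-ins for CRF, hypothyroidism, HT, DM
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic target label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

scores = {}
for name, model in [("RF", RandomForestClassifier(random_state=0)),
                    ("DT", DecisionTreeClassifier(random_state=0)),
                    ("NB", GaussianNB())]:
    model.fit(X_train, y_train)
    loaded = pickle.loads(pickle.dumps(model))  # save/load round trip
    scores[name] = accuracy_score(y_test, loaded.predict(X_test))
print(scores)
```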
After finding the accuracy of every algorithm, I built the web app using the flask library (request, jsonify, render_template) along with keras, and loaded the model using pickle.
The final model takes the submitted features, makes a prediction, and is served from app.py.
As the model runs on localhost, I also added various HTML tags and CSS styling to make it more presentable.
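A minimal sketch of the app.py serving pattern described above. The route name and feature layout are assumptions, and a tiny synthetic model stands in for the pickled one so the sketch runs end to end:

```python
# Sketch of a Flask prediction endpoint. In the project the model would be
# loaded with pickle from disk; here a small synthetic model stands in.
import numpy as np
from flask import Flask, request, jsonify
from sklearn.naive_bayes import GaussianNB

app = Flask(__name__)

# Stand-in for pickle.load(open("modelNB.pkl", "rb")).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = (X[:, 0] > 0).astype(int)
model = GaussianNB().fit(X, y)

@app.route("/predict", methods=["POST"])
def predict():
    # Expects JSON like {"features": [crf, hypothyroidism, ht, dm]}.
    features = request.get_json()["features"]
    prob = model.predict_proba([features])[0][1]
    return jsonify({"heart_disease_probability": float(prob)})

if __name__ == "__main__":
    app.run(debug=True)  # serves on localhost
```

The HTML/CSS front end would then POST the form values to this endpoint and render the returned probability.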
The code is shared freely on GitHub; link added below.
Machine Learning Implementation on Crop Health Monitoring System
The objective of our study is to provide a solution for smart agriculture by monitoring the agricultural field, which can assist farmers in increasing productivity to a great extent. Weather forecast data obtained from the IMD (Indian Meteorological Department), such as temperature and rainfall, together with a repository of soil parameters, gives insight into which crops are suitable for cultivation in a particular area. Thus, the proposed system takes the location of the user as input. From the location, the soil moisture is obtained. The processing part also takes into consideration two more datasets: one obtained from the weather department, forecasting the weather expected in the current year, and the other being static data. This static data comprises crop production figures and data related to the demand for various crops, obtained from various government websites. The proposed system applies machine learning and prediction algorithms such as Decision Tree, Naive Bayes, and Random Forest to identify patterns in the data and then processes it as per the input conditions. This in turn proposes the best feasible crops for the given environmental conditions. Thus, the system requires only the location of the user and suggests a number of profitable crops, giving the farmer a direct choice about which crop to cultivate. As past years' production is also taken into account, the prediction is more accurate.
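The prediction step can be sketched as below. The feature ranges, crop names, and the rule generating the labels are invented stand-ins for the IMD forecasts and government crop data:

```python
# Sketch of the crop-suggestion step: a classifier trained on weather and
# soil features predicts a suitable crop. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Feature columns: temperature (C), rainfall (mm), soil moisture (%).
X = rng.uniform([15, 200, 10], [40, 1200, 60], size=(200, 3))
crops = np.array(["rice", "wheat", "millet"])
# Invented labelling rule standing in for historical production data.
y = crops[(X[:, 1] > 800).astype(int) + (X[:, 0] < 22).astype(int)]

model = RandomForestClassifier(random_state=0).fit(X, y)
print(model.predict([[30.0, 1000.0, 45.0]]))  # suggestion for one location
```

In the proposed system, the same fit/predict pattern would be repeated for Decision Tree and Naive Bayes, and the suggestions compared.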
Dimensionality Reduction
What is dimensionality reduction?
Dimensionality reduction, or dimension reduction, is the transformation of data from a high-dimensional space into a low-dimensional space so that the low-dimensional representation retains some meaningful properties of the original data, ideally close to its intrinsic dimension.
Here are some of the benefits of applying dimensionality reduction to a dataset: the space required to store the data is reduced as the number of dimensions comes down; fewer dimensions lead to less computation/training time; and some algorithms do not perform well when the data has a large number of dimensions.
Dimensionality reduction refers to techniques for reducing the number of input variables in training data. When dealing with high dimensional data, it is often useful to reduce the dimensionality by projecting the data to a lower dimensional subspace which captures the “essence” of the data.
Principal Component Analysis (PCA)
PCA is a technique from linear algebra that can be used to automatically perform dimensionality reduction.
Linear Discriminant Analysis (LDA)
Linear Discriminant Analysis, or LDA for short, is a predictive modeling algorithm for multi-class classification. It can also be used as a dimensionality reduction technique, providing a projection of a training dataset that best separates the examples by their assigned class.
Kernel – PCA
PCA linearly transforms the original inputs into new uncorrelated features. KPCA is a nonlinear form of PCA: as the name suggests, the kernel trick is used to make KPCA nonlinear.
The dataset is taken from the UCI ML repository. It is a wine dataset where each row describes a different wine with 13 features: Alcohol, Malic_Acid, Ash, Ash_Alcanity, Magnesium, Total_Phenols, Flavanoids, Nonflavanoid_Phenols, Proanthocyanins, Color_Intensity, Hue, OD280, and Proline, plus a Customer_Segment label.
It is a business case study: I have to apply clustering to identify diverse segments of customers grouped by similar wine preferences, of which there are 3 categories. For the owner of this wine shop, I then have to build a predictive model trained on this data so that, for each new wine the owner stocks, we can apply the model to the reduced dimensions and predict which customer segment the new wine belongs to. Finally, we can recommend the right wine to the right customer to optimize sales and profit.
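Assuming the data matches scikit-learn's bundled copy of the UCI wine dataset, the three reducers can be applied side by side like this:

```python
# PCA, LDA, and Kernel PCA applied to the same wine data, each projecting
# the 13 features down to 2 dimensions before a classifier is trained.
from sklearn.datasets import load_wine
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA, KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_wine(return_X_y=True)          # 178 wines, 13 features, 3 segments
X = StandardScaler().fit_transform(X)      # scale before projecting

X_pca = PCA(n_components=2).fit_transform(X)
X_lda = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)  # supervised
X_kpca = KernelPCA(n_components=2, kernel="rbf").fit_transform(X)       # nonlinear

print(X_pca.shape, X_lda.shape, X_kpca.shape)  # each (178, 2)
```

Note that LDA uses the class labels (it is supervised), while PCA and Kernel PCA do not; LDA is also capped at n_classes - 1 components, which is 2 here.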
Results of all three:
Principal Component Analysis
Linear Discriminant Analysis
Student Staff Management System
This project was a minor project done in my B.Tech 3rd year and submitted to my department that same year. It was built entirely with VB.Net as its front end and MySQL as its database for data storage and management.
This was a small project prepared solely to perform basic operations swiftly on the data of the staff and students of the university, such as CRUD (create, retrieve, update, delete) operations, so that the data could be managed easily, retrieved quickly, and handled with less hassle. The languages used in the project were as follows:
1) VB.Net: for the front end
2) MySQL: for the database
The database prepared for the project was normalized up to 3NF so that the data could be stored in an optimized and effective manner. There are normal forms up to 4NF for creating a relational schema, with 1NF being the least normalized and 4NF the most. I chose 3NF because it gave me strong optimization with no data loss; 4NF can optimize the database further than 3NF, but a careless decomposition can lose information. Hence the optimal choice here was 3NF, and I went with it as I didn't want to lose any data in the process.
After designing the database, I moved on to designing the front end with .Net. I kept the user interface as simple as possible so that anyone could use it regardless of their knowledge of computer systems, so I chose a very simple interface that focuses only on the work at hand and carries no unnecessary decoration or coloring.
After completing both these parts, I linked the database to the program so that the front end could access the database running in the background and store and retrieve data easily and efficiently. After linking the two, the project was almost complete and ready to be deployed.
In short, in this project I successfully created a centralized management system for the students and staff of the university, which helped manage and store data more efficiently than the previous model.
P.S.: I don't currently have project links for 2 of my projects. Sorry for that.
My Rewards - Alexa Skill
An Alexa skill to reward kids for good behavior.
After building http://eFamilyBoard.com I decided to purchase an Echo Show (2nd gen) for comparison. eFamilyBoard has a few nicer features, but overall the Alexa is much more powerful and scalable. One heavily used eFamilyBoard feature, the sticker board, didn't have a comparable skill on Alexa. As a result, I decided to build the My Rewards skill to replace it.
What it does
It allows families and teachers to reward kids for good behavior. The user ultimately decides what to do with the rewards; personally, our kids earn $5 after they've earned 10 total rewards, then they start over. The user can add recipients and give multiple rewards at a time, for example, "Alexa, give John 5 stickers" or "Alexa, take away 2 stickers from John". And if you don't know what type of reward to give or take away, you can always simply say "rewards" in place of the reward type (e.g. football, sticker, heart, unicorn, truck, cookie, doughnut, etc.).
How I built it
I built it with the ASK CLI and Visual Studio Code. I started with the sample hello world app and refactored it to use TypeScript, Express, and ngrok so it can run locally. I also used Mocha with Chai for unit tests that must run and pass before I can deploy to AWS.
Challenges I ran into
I learned to get started by taking a course on Udemy, but the course didn't use TypeScript and deployed to AWS for every change, which would take forever to debug and build with efficiently. Instead, I set up a simple Express app and use ngrok to route calls to my local machine. This allows me to talk to my Alexa and debug by stepping through the code in VS Code.
Accomplishments that I'm proud of
Project setup, local debugging with TypeScript, and tests with 90%+ code coverage. Not only does it work for voice, but it also supports display templates to show the user what rewards each participant has earned. I was going to add ISP (in-skill purchasing) down the road but decided to do it from the start, and it ended up being easier than expected. For my first Alexa app, I think it works extremely well, and my kids started using it with no learning curve.
What I learned
Being my first Alexa app, I learned a ton: from how to simply use Alexa (still learning tricks) to how to interact with voice commands. I had also never used DynamoDB, but the Alexa SDK made that super easy.
What's next for My Rewards
Add support for more languages. My family has been using it during development, but I'm excited to see what others think of it.