Python World Map Geovisualization Dashboard Using COVID Data
Last Updated on May 3, 2021
1) Learn a cool hack using one line of code to convert a Jupyter notebook into a dashboard.
2) Work with "COVID-19 Data Repository by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University"
3) After you complete this project, you get a Jupyter notebook of all the work you covered (including GIFs). It acts as a useful learning tool that you can refer to at any time in the future.
4) Important terminology and definitions are explained.
NavAssist AI
Incorporating machine learning and haptic feedback, NavAssistAI detects the position and state of a crosswalk light, which enables it to aid the visually impaired in daily navigation.
One day, we were perusing YouTube looking for an idea for our school's science fair. On that day, we came across a blind YouTuber named Tommy Edison. He had uploaded a video of himself attempting to cross a busy intersection on his own. It was apparent that he was having difficulty, and at one point he almost ran into a street sign. After seeing his video, we decided that we wanted to leverage new technology to help people like Tommy in daily navigation, so we created NavAssist AI.
What it does
In essence, NavAssist AI uses object detection to detect both the position and state of a crosswalk light (stop hand or walking person). It then processes this information and relays it to the user through haptic feedback in the form of vibration motors inside a headband. This allows the user to understand whether it is safe to cross the street or not, and in which direction they should face when crossing.
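The detection-to-haptics step described above can be sketched in a few lines. This is an illustrative sketch only: the class names, the frame-position convention, and the motor interface are assumptions, and on the real device the intensities would drive GPIO-connected vibration motors rather than be printed.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # state of the crosswalk light: "walk" or "stop" (assumed names)
    x_center: float   # horizontal position of the light in the frame, 0.0 (left) to 1.0 (right)

def haptic_command(det: Detection) -> dict:
    """Translate one detection into intensities for a headband with
    left and right vibration motors, steering the user toward the light."""
    left = max(0.0, 1.0 - 2.0 * det.x_center)   # strongest when the light is far left
    right = max(0.0, 2.0 * det.x_center - 1.0)  # strongest when the light is far right
    return {
        "safe_to_cross": det.label == "walk",
        "left_motor": round(left, 2),
        "right_motor": round(right, 2),
    }

print(haptic_command(Detection("walk", 0.25)))
# → {'safe_to_cross': True, 'left_motor': 0.5, 'right_motor': 0.0}
```

The point of the mapping is that a single object detection carries both pieces of information the user needs: whether to cross (the class label) and which way to face (the box position).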
How we built it
We started by gathering our own dataset of 200+ images of crosswalk lights, because no existing image library covered them. We then ran through many iterations on many different models, training each on this dataset. Across the different model architectures and iterations, we strove to find a balance between accuracy and speed. We eventually discovered that an SSDLite MobileNet model from the TensorFlow model zoo had the balance we required. Using transfer learning and many iterations, we trained a model that finally worked. We implemented it on a Raspberry Pi with a camera, soldered on a power button and vibration motors, and custom-designed a 3D-printed case with room for a battery. This became our wearable prototype device.
Challenges we ran into
When we started this project, we knew nothing about machine learning or TensorFlow and had to start from scratch. However, with some googling and trying things out, we were able to figure out how to implement TensorFlow for our project with relative ease. Another challenge was collecting, preparing, and labelling our dataset of 200+ images. Our most important challenge, though, was not knowing what it's like to be visually impaired. To overcome this, we went out to people in the blind community and talked to them so that we could properly understand the problem and create a good solution.
Accomplishments that we're proud of
- Making our first working model that could tell the difference between stop and go
- Getting the haptic feedback implementation to work with the Raspberry Pi
- When we first tested the device and successfully crossed the street
- When we presented our work at TensorFlow World 2019
All of these milestones made us very proud because we are progressing towards something that could really help people in the world.
What we learned
Throughout the development of this project, we learned so much. Going into it, we had no idea what we were doing. Along the way, we learned about neural networks, machine learning, computer vision, as well as practical skills such as soldering and 3D CAD. Most of all, we learned that through perseverance and determination, you can make progress towards helping to solve problems in the world, even if you don't initially think you have the resources or knowledge.
What's next for NavAssistAI
We hope to expand its ability for detecting objects. For example, we would like to add detection for things such as obstacles so that it may aid in more than just crossing the street. We are also working to make the wearable device smaller and more portable, as our first prototype can be somewhat burdensome. In the future, we hope to eventually reach a point where it will be marketable, and we can start helping people everywhere.
Iris Flower Prediction
Understanding the scenario
Let’s assume that a hobby botanist is interested in distinguishing the species of some iris flowers that she has found. She has collected some measurements associated with each iris, which are:
- the length and width of the petals
- the length and width of the sepals, all measured in centimetres.
She also has the measurements of some irises that have been previously identified by an expert botanist as belonging to the species setosa, versicolor, or virginica. For these measurements, she can be certain of which species each iris belongs to. We will consider that these are the only species our botanist will encounter.
The goal is to create a machine learning model that can learn from the measurements of these irises whose species are already known, so that we can predict the species for the new irises that she has found.
- scikit-learn (sklearn) is a collection of Python modules built for data science applications (including machine learning). Here, we'll be using three particular modules:
- load_iris: The classic dataset for the iris classification problem. (NumPy array)
- train_test_split: method for splitting our dataset.
- KNeighborsClassifier: method for classifying using the K-Nearest Neighbor approach.
- NumPy is a Python library that makes it easier to work with N-dimensional arrays and has a large collection of mathematical functions at its disposal. Its base data type is the "numpy.ndarray".
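To make the ndarray concrete: the iris measurements naturally form a 2-D array, one row per flower and one column per feature. The values below are made up for illustration.

```python
import numpy as np

# Two example flowers, four features each:
# sepal length, sepal width, petal length, petal width (cm)
X = np.array([[5.1, 3.5, 1.4, 0.2],
              [6.3, 3.3, 6.0, 2.5]])

print(type(X).__name__)  # → ndarray
print(X.shape)           # → (2, 4): 2 samples, 4 features
print(X.mean(axis=0))    # per-feature mean across the samples
```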
Building our model
As we have measurements for which we know the correct species of iris, this is a supervised learning problem. We want to predict one of several options (the species of iris), making it an example of a classification problem. The possible outputs (different species of irises) are called classes. Every iris in the dataset belongs to one of the three classes considered in the model, so this problem is a three-class classification problem. The desired output for a single data point (an iris) is the species of the flower given its features. For a particular data point, the class/species it belongs to is called its label.
As already stated, we will use the Iris Dataset already included in scikit-learn.
Now, let’s print some interesting data about our dataset:
ACCURACY: we get an accuracy of 93% on the test set.
OUTPUT: in this case, as we have 2 new samples, [[3, 5, 4, 2], [2, 3, 5, 4]], the iris species predicted by our model from the given features are:
predictions: ['versicolor', 'virginica']
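The full workflow described above can be sketched end to end as below. The choice of `n_neighbors=1` and `random_state=0` are assumptions for reproducibility, so the exact accuracy may differ from the 93% quoted, which depends on the train/test split and on k.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Load the classic iris dataset bundled with scikit-learn
iris = load_iris()

# Hold out part of the data so we can measure accuracy on unseen irises
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, random_state=0)

# Fit a K-Nearest Neighbors classifier on the training set
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train, y_train)

print(f"test accuracy: {knn.score(X_test, y_test):.2f}")

# Predict the species of two new flowers from their four measurements
new_flowers = [[3, 5, 4, 2], [2, 3, 5, 4]]
predictions = knn.predict(new_flowers)
print("predictions:", [iris.target_names[p] for p in predictions])
```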
For more details, see my GitHub repository.
Personal Assistance
PERSONAL COMPUTER ASSISTANT
This project works like Siri on the iPhone or Google Assistant on Android.
It takes voice input from the user and can manage many things like YouTube, Google, Stack Overflow, and the date and time;
it can also play music and shut down your PC.
The necessary pip installations are:
1. pip install pyttsx3
2. pip install SpeechRecognition
3. pip install wikipedia
4. pip install pipwin
5. pip install PyAudio
(webbrowser is part of Python's standard library, so it needs no pip install.)
What is it capable of doing?
Input format:
This piece of code takes input from the user's voice. It can:
1. Open Google, YouTube, Stack Overflow, and Gmail; tell the date and time; and play music for the user.
2. Shut down your PC if you ask it to and then give permission with 'yes'.
Module installation instructions:
1. Please install the modules listed at the beginning of the code in case it throws an error (on my PC it runs smoothly); use pip install module_name to install packages or modules.
2. Please run this code in a Jupyter notebook or any offline editor, e.g. VS Code, PyCharm, Atom, or Sublime.
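The core of such an assistant is a command dispatcher that routes a recognised phrase to an action. The sketch below shows that routing on plain text so it can run without a microphone; in the real project the query would come from speech_recognition and replies would be spoken with pyttsx3. The command phrases and URLs here are illustrative, not the project's exact list.

```python
import datetime
import webbrowser  # standard library; used to open URLs on a desktop

# Map trigger phrases to actions; each action returns (kind, payload)
COMMANDS = {
    "open youtube": lambda: ("browse", "https://www.youtube.com"),
    "open google": lambda: ("browse", "https://www.google.com"),
    "open stackoverflow": lambda: ("browse", "https://stackoverflow.com"),
    "the time": lambda: ("say", datetime.datetime.now().strftime("%H:%M")),
}

def dispatch(query: str):
    """Route a recognised phrase to its action."""
    query = query.lower()
    for phrase, action in COMMANDS.items():
        if phrase in query:
            return action()
    return ("say", "Sorry, I did not understand that.")

kind, payload = dispatch("please open YouTube")
if kind == "browse":
    print("would open:", payload)  # on a desktop: webbrowser.open(payload)
```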
Air Quality Analysis and Prediction of an Italian City
- Predict the value of CO in mg/m^3 relative to a reference value, with respect to the available data. If you need to assume a reference value, do so, but specify your assumption.
- Predict the value of CO in mg/m^3 for the next 3 weeks as hourly averaged concentrations.
Data Set Information
The sensor device was located on the field in a significantly polluted area, at road level, within an Italian city. Data were recorded from March 2004 to February 2005 (one year), representing the longest freely available recordings of responses from air quality chemical sensor devices deployed in the field. Ground-truth hourly averaged concentrations for CO, Non-Methanic Hydrocarbons, Benzene, Total Nitrogen Oxides (NOx), and Nitrogen Dioxide (NO2) were provided by a co-located reference certified analyzer. Evidence of cross-sensitivities as well as both concept and sensor drift is present, as described in De Vito et al., Sens. and Act. B, Vol. 129, 2, 2008 (citation required), eventually affecting the sensors' concentration estimation capabilities. The dataset attributes are:
0 Date (DD/MM/YYYY)
1 Time (HH.MM.SS)
2 True hourly averaged concentration CO in mg/m^3 (reference analyzer)
3 PT08.S1 (tin oxide) hourly averaged sensor response (nominally CO targeted)
4 True hourly averaged overall Non Metanic HydroCarbons concentration in microg/m^3 (reference analyzer)
5 True hourly averaged Benzene concentration in microg/m^3 (reference analyzer)
6 PT08.S2 (titania) hourly averaged sensor response (nominally NMHC targeted)
7 True hourly averaged NOx concentration in ppb (reference analyzer)
8 PT08.S3 (tungsten oxide) hourly averaged sensor response (nominally NOx targeted)
9 True hourly averaged NO2 concentration in microg/m^3 (reference analyzer)
10 PT08.S4 (tungsten oxide) hourly averaged sensor response (nominally NO2 targeted)
11 PT08.S5 (indium oxide) hourly averaged sensor response (nominally O3 targeted)
12 Temperature in °C
13 Relative Humidity (%)
14 AH Absolute Humidity.
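One simple baseline for the forecasting task above is to learn the average CO concentration at each hour of the day and repeat that daily profile forward. The sketch below uses synthetic data, since the UCI file is not bundled here; the column name CO(GT) follows the attribute list, and the -200 missing-value convention matches the real dataset's documentation.

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the hourly CO(GT) series (mg/m^3):
# a daily cycle with rush-hour peaks plus noise, over 30 days
rng = np.random.default_rng(0)
hours = pd.date_range("2004-03-10", periods=24 * 30, freq="h")
co = 2.0 + 1.5 * np.sin((hours.hour - 8) / 24 * 2 * np.pi) + rng.normal(0, 0.2, len(hours))
df = pd.DataFrame({"CO(GT)": co}, index=hours)

# The real dataset marks missing values as -200; mask them before averaging
df = df.mask(df == -200)

# Learn the hour-of-day profile from history
profile = df["CO(GT)"].groupby(df.index.hour).mean()

# Forecast the next 3 weeks, hourly, by repeating the daily profile
future = pd.date_range(hours[-1] + pd.Timedelta(hours=1), periods=24 * 21, freq="h")
forecast = pd.Series(profile.loc[future.hour].to_numpy(), index=future)
print(forecast.head(3))
```

A seasonal-naive baseline like this is a useful yardstick before trying heavier time-series models on the real sensor data.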
Determination of a Person's Health
The project was built with the intent of helping society. It is estimated that approximately 1.9 billion adults worldwide are overweight, and the health problems associated with this are largely preventable.
The project has been made with the help of data analysis and machine learning using Python, with a GUI output page. In this project, the machine first analyses the existing data and then draws conclusions about a person's health from his/her given factors.
In this project, the machine is given a person's gender and either their height or their weight. If the height is given, the weight is predicted, and vice versa. Through these predictions, the machine will tell us about the person's health.
The main goal is to help the society for its betterment as far as health is concerned.
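The height-to-weight prediction described above can be sketched with a linear regression on synthetic data (the UCI gender/height/weight file is not bundled here, and the coefficients below are invented for illustration). Predicting height from weight works the same way with the columns swapped.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Generate a plausible synthetic sample: weight rises with height,
# with an offset for gender and some individual variation
rng = np.random.default_rng(42)
n = 200
gender = rng.integers(0, 2, n)                          # 0 = female, 1 = male
height_cm = rng.normal(165 + 12 * gender, 7, n)
weight_kg = 0.9 * height_cm - 90 + 5 * gender + rng.normal(0, 4, n)

# Fit weight as a function of (gender, height)
X = np.column_stack([gender, height_cm])
model = LinearRegression().fit(X, weight_kg)

# Predict the weight of a 180 cm male
pred = model.predict([[1, 180.0]])[0]
print(f"predicted weight: {pred:.1f} kg")
```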
The dataset used is from the UCI repository. It includes four attributes.
The machine will be trained on these attributes to determine a person's height or weight and the health category it falls in.
The categories are-
1. 0 – Underweight
2. 1 – Normal weight
3. 2 – Healthy
4. 3 – Overweight
5. 4 – Obesity
The methods followed, in chronological order, are-
1. Loading dataset (using pandas library)
2. Dataset cleaning (using pandas and numpy libraries)
3. Dataset pre-processing
4. Data visualization (using seaborn, matplotlib and matplotlib.pyplot libraries)
4.1 Univariate analysis
4.2 Bivariate analysis
5. Correlation matrix
The machine learning algorithms applied were-
1. Linear Regression
2. Logistic Regression
3. KNN Classifier
4. Decision Tree Classifier
5. Random Forest Classifier
The Random Forest Classifier gave the highest accuracy, about 95%, while Logistic Regression gave the lowest, about 76%.
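The classifier comparison above can be sketched as follows, on a synthetic BMI-style dataset since the original UCI file is not included. Accuracies will therefore differ from the 95% and 76% reported for the real data, and the category boundaries used here (standard BMI cut-offs) are an assumption.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# Synthetic height/weight sample; labels 0-4 come from BMI cut-offs
rng = np.random.default_rng(0)
n = 600
height_m = rng.normal(1.7, 0.1, n)
weight_kg = rng.normal(70, 15, n)
bmi = weight_kg / height_m**2
y = np.digitize(bmi, [18.5, 25, 30, 35])      # five weight categories
X = np.column_stack([height_m, weight_kg])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "KNN Classifier": KNeighborsClassifier(),
    "Decision Tree Classifier": DecisionTreeClassifier(random_state=0),
    "Random Forest Classifier": RandomForestClassifier(random_state=0),
}
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
for name, s in scores.items():
    print(f"{name}: {s:.2f}")
```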
On the GUI page, the user will be asked for:
1. Full name
2. Whether they know their height or weight
3. Their height or weight