Flask Blog

Last Updated on May 3, 2021

About

This project was done as a course project for:

Python Flask Tutorial: Full-Featured Web App, a video series by Corey Schafer.

The project is a blog website.

· Developed a fully working blog website using the Flask framework in Python.

· User-specific features include creating an account, logging in with the registered credentials, and editing account information, including the display picture, at any time.

· Post-specific features include creating, updating, and deleting posts.

· Integrated SQLAlchemy to maintain the users and posts databases.

· Programmed a function to create a secret key using Bcrypt and email it to the user via flask_mail for the forgot-password feature.

· Used Flask Blueprints to organize packages such as users, posts, and other pages, each with its own files for routes, forms, and models (see the sketch below).
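As a rough sketch of how those pieces fit together, the application factory below registers the blueprints; the package layout follows the tutorial's conventions, but the exact names are assumptions rather than the project's actual code.

```python
# flaskblog/__init__.py -- a minimal sketch of the application factory.
# Package and blueprint names follow the tutorial layout (assumptions).
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_bcrypt import Bcrypt
from flask_mail import Mail

db = SQLAlchemy()
bcrypt = Bcrypt()
mail = Mail()

def create_app():
    app = Flask(__name__)
    app.config["SECRET_KEY"] = "change-me"                 # load from env in practice
    app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///site.db"

    db.init_app(app)
    bcrypt.init_app(app)
    mail.init_app(app)

    # Each blueprint bundles its own routes, forms, and templates.
    from flaskblog.users.routes import users
    from flaskblog.posts.routes import posts
    from flaskblog.main.routes import main
    app.register_blueprint(users)
    app.register_blueprint(posts)
    app.register_blueprint(main)
    return app
```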


More Details: Flask Blog


Image Processing

Last Updated on May 3, 2021

About

What is image processing?

The aim of pre-processing is to improve the image data by suppressing unwanted distortions or enhancing image features that are important for further processing. Geometric transformations of images (e.g., rotation, scaling, translation) are also classified among pre-processing methods here, since they rely on similar techniques.

Preprocessing refers to all the transformations applied to the raw data before it is fed to a machine learning or deep learning algorithm. For instance, training a convolutional neural network on raw images will probably lead to poor classification performance.


Convolutional neural network

A convolutional neural network (CNN) is a type of artificial neural network used in image recognition and processing that is specifically designed to process pixel data. A neural network, in turn, is a system of hardware and/or software patterned after the operation of neurons in the human brain.


CNNs are used for image classification and recognition because of their high accuracy. A CNN follows a hierarchical model that builds up the network like a funnel and ends in a fully-connected layer, where all the neurons are connected to each other and the output is processed.


Problem description: As a case study, we have a dataset with three subfolders: single_prediction, containing just 2 images used to check that the trained CNN actually works on new inputs; test_set, with 2,000 images (1,000 dogs and 1,000 cats) on which we evaluate the model; and training_set, with 8,000 images (4,000 cats and 4,000 dogs) on which we train the CNN. The model's task is to predict whether a given image shows a cat or a dog; a test image can be picked by generating a random number and choosing the corresponding image. E.g., cat.
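Below is a minimal Keras sketch of such a CNN, assuming the folder layout described above; the architecture, image size, and the file name under single_prediction are illustrative assumptions, not the project's exact values.

```python
# Minimal cats-vs-dogs CNN sketch in Keras. Folder names follow the dataset
# layout described above; the architecture and file names are illustrative.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.preprocessing.image import (ImageDataGenerator,
                                                  img_to_array, load_img)

train_gen = ImageDataGenerator(rescale=1.0 / 255, shear_range=0.2,
                               zoom_range=0.2, horizontal_flip=True)
test_gen = ImageDataGenerator(rescale=1.0 / 255)

training_set = train_gen.flow_from_directory("dataset/training_set",
                                             target_size=(64, 64),
                                             batch_size=32, class_mode="binary")
test_set = test_gen.flow_from_directory("dataset/test_set",
                                        target_size=(64, 64),
                                        batch_size=32, class_mode="binary")

model = keras.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(64, 64, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),        # binary output: cat vs dog
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(training_set, validation_data=test_set, epochs=25)

# Predict one of the two images in single_prediction/ (file name assumed).
img = load_img("dataset/single_prediction/cat_or_dog_1.jpg", target_size=(64, 64))
x = img_to_array(img)[np.newaxis] / 255.0
print("dog" if model.predict(x)[0][0] > 0.5 else "cat")
```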


Prediction for CAT

Prediction for DOG

More Details: Image Processing



Smart Health Monitoring App

Last Updated on May 3, 2021

About

The proposed solution is an online, mobile-based application containing information on pre- and post-natal care. The app helps a pregnant woman track pregnancy milestones and know when to worry and when not to. To use the app, the user registers by entering her name, age, mobile number, and preferred language. The app is made user friendly by being multi-lingual and offering audio-video guides, to help people with impaired hearing or sight, keeping in mind women who live in rural areas or have been deprived of primary education. The app encompasses two sections: pre-natal and post-natal.

In case of emergency, i.e., when the water breaks, there is a provision to send an emergency message (notification) through FCM (Firebase Cloud Messaging). The app first tries to access the phone's GPS; if GPS is not on, the Geolocation API is used instead. Using the Wi-Fi nodes the device can detect, the Internet, Google's datasets, and nearby cell towers, a precise location is generated and sent via geocoding to FCM, which in turn generates push notifications. The notification tokens are sent to registered users, hospitals, nearby doctors, etc., so that the necessary actions can be taken and timely help provided.
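As a rough illustration of the notification step, the sketch below sends an emergency push through FCM using the firebase-admin Python SDK; the credential path, device token, and coordinates are hypothetical placeholders.

```python
# Sketch of the emergency push via Firebase Cloud Messaging using the
# firebase-admin SDK. Credential path, token, and coordinates are placeholders.
import firebase_admin
from firebase_admin import credentials, messaging

cred = credentials.Certificate("service-account.json")   # placeholder path
firebase_admin.initialize_app(cred)

def send_emergency_alert(device_token: str, lat: float, lng: float) -> str:
    message = messaging.Message(
        notification=messaging.Notification(
            title="Emergency: immediate help needed",
            body="Water broke; patient location attached.",
        ),
        data={"lat": str(lat), "lng": str(lng)},   # FCM data values must be strings
        token=device_token,                        # registered user/hospital/doctor
    )
    return messaging.send(message)                 # returns a message ID on success

# send_emergency_alert("registered-device-token", 19.0760, 72.8777)
```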

More Details: Smart Health Monitoring App



Bakery Management API

Last Updated on May 3, 2021

About

# Bakery Management System


This **Bakery Management System** is based on the Django REST Framework and uses token authentication.

For a better understanding, read the documentation at [docs](https://documenter.getpostman.com/view/14584052/TWDdjZYb).<br>

The project is live at [Bakery](https://bakery-management-api.herokuapp.com/).


## Steps to run the API:


1. Install the requirements: pip install -r requirements.txt

2. Run: python manage.py makemigrations

3. Run: python manage.py migrate

4. Run: python manage.py runserver




## API-OVERVIEW


Now open http://127.0.0.1:8000/ in your browser; this will give the details about the functionality offered.

To perform any of the operations mentioned, just append the corresponding relative URL to http://127.0.0.1:8000/.


***Note: All the endpoints corresponding to Ingredients and Dishes are only accessible to an ADMIN.***





## MANAGING THE ACCOUNTS REGISTRATION AND AUTHENTICATION




### Registering an ADMIN


We can only register an admin through the Django admin panel. To access the Django admin panel you have to create a superuser.

Follow these steps to register an ADMIN user:

1. python manage.py createsuperuser

2. Fill in all the details (username, email and password).

3. Now go to http://127.0.0.1:8000/admin/ and log in with the credentials you just entered.

4. Register the admin through the USERS section (please tick is_staff; only then will the user be considered an ADMIN).



### Registering a CUSTOMER


URL - http://127.0.0.1:8000/accounts/register/ REQUEST-TYPE =[POST] **:**


This uses a POST request and expects username, email, password, first_name, and last_name to be entered through a JSON object or form data. The username needs to be UNIQUE.



### LOGGING IN A USER


URL - http://127.0.0.1:8000/accounts/login/ REQUEST-TYPE =[POST] **:**


This uses a POST request and expects username and password. After a successful login, this returns a Token and an Expiry.

Expiry denotes how long the token is valid; after the expiry, you need to log in again.
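As an illustration, a client could exercise the registration and login endpoints with Python's requests library as sketched below; the Authorization: Token <token> header format and the token field name in the login response are assumptions based on common DRF token-auth setups.

```python
# Sketch of the register/login flow with the requests library. The
# "Authorization: Token <...>" header and the "token" field name are assumed.
import requests

BASE = "http://127.0.0.1:8000"

requests.post(f"{BASE}/accounts/register/", json={
    "username": "alice", "email": "alice@example.com",    # example values
    "password": "s3cret-pass", "first_name": "Alice", "last_name": "Doe",
})

resp = requests.post(f"{BASE}/accounts/login/",
                     json={"username": "alice", "password": "s3cret-pass"})
token = resp.json()["token"]                              # field name assumed

# Authenticated request: e.g., a customer viewing the menu.
menu = requests.get(f"{BASE}/menu/",
                    headers={"Authorization": f"Token {token}"})
print(menu.json())
```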



### LOGGING OUT A USER


URL - http://127.0.0.1:8000/accounts/logout/ REQUEST-TYPE =[] **:**

For this, provide the token in the header. The user whose token you entered will be logged out.






## OPERATIONS ON INGREDIENTS (ACCESSIBLE ONLY TO ADMINS)



### Adding an Ingredient


URL - http://127.0.0.1:8000/ingredients/ REQUEST-TYPE =[POST]  **:**

This uses a POST request and expects name, quantity, quantity_type, and cost_price to be entered through a JSON object or form data. The name needs to be UNIQUE, and Django adds a primary key named "id" by default. The quantity_type field accepts exactly one of three choices: 'kg' for kilogram, 'lt' for litre, and '_' for plain numbers.
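For example, an admin client might add an ingredient as in the sketch below; the field names come from the description above, while the token header format is an assumption.

```python
# Sketch of adding an ingredient as an ADMIN; the token header is assumed.
import requests

ADMIN_TOKEN = "..."   # obtained from /accounts/login/ with an is_staff user

resp = requests.post(
    "http://127.0.0.1:8000/ingredients/",
    headers={"Authorization": f"Token {ADMIN_TOKEN}"},
    json={"name": "flour",
          "quantity": 25,
          "quantity_type": "kg",      # one of 'kg', 'lt', '_'
          "cost_price": 40.0},
)
print(resp.status_code, resp.json())
```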



### Get list of all Ingredients


URL - http://127.0.0.1:8000/ingredients/  REQUEST-TYPE =[GET]  **:**

This returns a JSON value containing the list of all ingredients.



### Getting details of a single Ingredient


URL - http://127.0.0.1:8000/ingredients/id/ REQUEST-TYPE =[GET]  **:**

The "id" mentioned in the above url must be an integer referring to the "id" of the ingredient you want to fetch.This returns details of the 

ingredient you mentioned.



### Deleting a single Ingredient


URL - http://127.0.0.1:8000/ingredients/id/ REQUEST-TYPE =[DELETE]  **:**

The "id" mentioned in the above url must be an integer referring to the "id" of the ingredient you want to fetch.This deletes the 

ingredient you mentioned.






## OPERATIONS ON MENU (ACCESSIBLE ONLY TO ADMINS)




### Adding a dish to the menu


URL - http://127.0.0.1:8000/menu/ REQUEST-TYPE =[POST]  **:**

This uses a POST request and expects name, quantity, description, cost_price, selling_price, and ingredients to be entered through a JSON object or form data. The name needs to be UNIQUE, and Django adds a primary key named "id" by default. The ingredients field can contain multiple ingredient ids.



### Get list of all dishes (also available to CUSTOMERs)


URL - http://127.0.0.1:8000/menu/  REQUEST-TYPE =[GET]   **:**

This returns a JSON value containing the details of all dishes.


***Note: This API depends on the type of user logged in. If a CUSTOMER is logged in, it will return only the names and prices.***



### Getting details of a single Dish


URL - http://127.0.0.1:8000/menu/id/ REQUEST-TYPE =[GET]   **:**

The "id" mentioned in the above url must be an integer referring to the "id" of the Dish you want to fetch.This returns details of the 

Dish you mentioned.



### Deleting a single Dish


URL - http://127.0.0.1:8000/menu/id/ REQUEST-TYPE =[DELETE]  **:**

The "id" mentioned in the above URL must be an integer referring to the "id" of the dish you want to delete. This deletes the dish you mentioned.






## OPERATIONS ON ORDER (ACCESSIBLE TO THE CUSTOMER)



### Adding/Placing an order 


URL - http://127.0.0.1:8000/order/ REQUEST-TYPE =[POST]   **:** 

This uses a POST request and expects orderby and items_ordered to be entered through a JSON object or form data. Django adds a primary key named "id" by default. The items_ordered field can contain multiple dish ids.
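A customer client might place an order as in the sketch below; again, the token header format is an assumption.

```python
# Sketch of placing an order as a CUSTOMER; the token header is assumed.
import requests

CUSTOMER_TOKEN = "..."   # obtained from /accounts/login/

resp = requests.post(
    "http://127.0.0.1:8000/order/",
    headers={"Authorization": f"Token {CUSTOMER_TOKEN}"},
    json={"orderby": "alice",           # example value
          "items_ordered": [1, 3]},     # ids of dishes from /menu/
)
print(resp.status_code, resp.json())
```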



### Getting details of a single Order 


URL - http://127.0.0.1:8000/order/id/ REQUEST-TYPE =[GET]   **:**  

The "id" mentioned in the above url must be an integer referring to the "id" of the Order you want to fetch.This returns details of the 

Order you mentioned.


### Deleting a single Order 


URL - http://127.0.0.1:8000/order/id/ REQUEST-TYPE =[DELETE]   **:**

The "id" mentioned in the above URL must be an integer referring to the "id" of the order you want to delete. This deletes the order you mentioned.


### Order History 


URL - http://127.0.0.1:8000/order/history/ REQUEST-TYPE =[GET]  **:**  

This will return all the orders placed by the customer making the request (latest first).


More Details: Bakery Management API



Covid Tracker On Twitter Using Data Science And AI

Last Updated on May 3, 2021

About

Introduction

Hi folks, I hope you are doing well in these difficult times! We are all going through the unprecedented time of the Corona Virus pandemic. Some people lost their lives, but many of us successfully defeated this new strain, i.e., Covid-19. The virus was declared a pandemic by the World Health Organization on 11th March 2020. This article will analyze various types of "Tweets" gathered during pandemic times. The study can be helpful for different stakeholders.

For example, the government can use this information in policymaking, as it can learn how people are reacting to this new strain and what challenges they are facing, such as food scarcity, panic attacks, etc. For-profit organizations can benefit from analyzing these sentiments; one of the tweets, for instance, tells us about the scarcity of masks and toilet paper, so such organizations can start producing essential items and thereby make profits. NGOs can decide their strategies for how to rehabilitate people by using pertinent facts and information.

In this project, we are going to predict the sentiments of COVID-19 tweets. The data was gathered from Twitter, and I'm going to use a Python environment to implement this project.

 

Problem Statement

The given challenge is to build a classification model to predict the sentiment of Covid-19 tweets. The tweets have been pulled from Twitter and tagged manually. We are given information like Location, Tweet At, Original Tweet, and Sentiment.

Approach To Analyze Various Sentiments

Before we proceed further, one should know what is meant by Sentiment Analysis. Sentiment Analysis is the process of computationally identifying and categorizing opinions expressed in a piece of text, especially in order to determine whether the writer's attitude towards a particular topic is Positive, Negative, or Neutral. (Oxford Dictionary)

Following is the standard operating procedure for tackling this kind of Sentiment Analysis project. We will go through this procedure to predict what we are supposed to predict!

  1. Exploratory Data Analysis.

  2. Data Preprocessing.

  3. Vectorization.

  4. Classification Models.

  5. Evaluation.

  6. Conclusion.

Let’s Guess some tweets

I will read a tweet, and you tell me its sentiment: Positive, Negative, or Neutral. The first tweet is "Still shocked by the number of #Toronto supermarket employees working without some sort of mask. We all know by now, employees can be asymptomatic while spreading #coronavirus". What's your guess? Yeah, you are correct. This is a Negative tweet because it contains negative words like "shocked".

If you can’t able to guess the above tweet, don’t worry I have another tweet for you. Let’s guess this tweet-“Due to the Covid-19 situation, we have increased demand for all food products. The wait time may be longer for all online orders, particularly beef share and freezer packs. We thank you for your patience during this time”. This time you are absolutely correct in predicting this tweet as “Positive”. The words like “thank you”, “increased demand” are optimistic in nature hence these words categorized the tweet into positive.

Data Summary

The original dataset has 6 columns and 41,157 rows. In order to analyze the sentiments, we require just two columns, named Original Tweet and Sentiment. There are five types of sentiments: Extremely Negative, Negative, Neutral, Positive, and Extremely Positive, as you can see in the following picture.

Summary Of Dataset

 

Basic Exploratory Data Analysis

The columns "UserName" and "ScreenName" do not give any meaningful insights for our analysis, hence we are not using these features for model building. All the tweet data was collected during March and April 2020. The following bar plot shows the number of unique values in each column.

There are some null values in the location column, but we don't need to deal with them, as we are only going to use two columns, i.e., "Sentiment" and "Original Tweet". The largest share of tweets came from London (11.7%), as is evident from the following figure.

Some words, such as 'coronavirus' and 'grocery store', have the maximum frequency in our dataset, as we can see from the following word cloud. There are various #hashtags in the tweets column, but they are almost the same across all sentiments, hence they do not give us meaningful information.

Word cloud showing the words with the maximum frequency in our Tweet column

When we explore the 'Sentiment' column, we find that most people have positive sentiments about various issues, which shows their optimism during pandemic times. Very few people have extremely negative thoughts about Covid-19.

 

Data Pre-processing

Preprocessing of the text data is an essential step, as it makes the raw text ready for mining. The objective of this step is to remove noise that is less relevant for finding the sentiment of tweets, such as punctuation (., ?, " etc.), special characters (@, %, &, $, etc.), numbers (1, 2, 3, etc.), Twitter handles, links (https:/http:), and terms which don't carry much weight in the context of the text.

Also, we need to remove stop words from the tweets. Stop words are words in natural language that have very little meaning, such as "is", "an", "the", etc. To remove stop words from a sentence, you can split your text into words and then drop each word that exists in the list of stop words provided by NLTK.

Then we need to normalize the tweets using Stemming or Lemmatization. "Stemming" is a rule-based process of stripping suffixes ("ing", "ly", "es", "ed", "s", etc.) from a word. For example, "play", "player", "played", "plays" and "playing" are different variations of the word "play".

Stemming will not always convert words into meaningful words. For example, "considered" gets stemmed to "consid", which is not a meaningful word and looks like a spelling mistake. The better way is to use lemmatization instead of stemming.

Lemmatization is a more powerful operation that takes into consideration the morphological analysis of the word. It returns the lemma, which is the base form of all of the word's inflectional forms.

 

Here, in the lemmatization process, we convert the word "raising" to its base form "raise". We also need to convert all tweets to lower case before we do the normalization.

We can also include a tokenization step. In tokenization, we convert a group of sentences into tokens. It is also called text segmentation or lexical analysis; it is basically splitting the data into small chunks of words. Tokenization in Python can be done with the NLTK library's word_tokenize() function.
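Putting these steps together, a minimal NLTK-based preprocessing sketch could look like the following; the regex patterns and the cleaning order are illustrative choices, not the only correct ones.

```python
# Minimal tweet-preprocessing sketch with NLTK: lowercase, strip noise,
# tokenize, remove stop words, lemmatize. Regex patterns are illustrative.
import re

import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize

nltk.download("punkt")
nltk.download("stopwords")
nltk.download("wordnet")

stop_words = set(stopwords.words("english"))
lemmatizer = WordNetLemmatizer()

def preprocess(tweet: str) -> list:
    tweet = tweet.lower()
    tweet = re.sub(r"https?://\S+", " ", tweet)   # links
    tweet = re.sub(r"@\w+", " ", tweet)           # twitter handles
    tweet = re.sub(r"[^a-z\s]", " ", tweet)       # punctuation, digits, symbols
    tokens = word_tokenize(tweet)
    return [lemmatizer.lemmatize(t) for t in tokens if t not in stop_words]

# Prints the cleaned token list for a sample tweet.
print(preprocess("Still shocked by #Toronto supermarket employees!"))
```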

Vectorization

We can use a count vectorizer or a TF-IDF vectorizer. A count vectorizer creates a sparse matrix of all the words and the number of times they are present in a document.

TF-IDF, short for term frequency-inverse document frequency, is a numerical statistic that is intended to reflect how important a word is to a document in a collection or corpus. The TF-IDF value increases proportionally with the number of times a word appears in the document and is offset by the number of documents in the corpus that contain the word, which helps to adjust for the fact that some words appear more frequently in general. (wiki)
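Both options are available in scikit-learn, as this small sketch shows:

```python
# Sketch of both vectorization options with scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

corpus = ["panic buying in the grocery store",            # toy example documents
          "thank you for your patience during this time"]

bow = CountVectorizer().fit_transform(corpus)     # sparse term-count matrix
tfidf = TfidfVectorizer().fit_transform(corpus)   # tf-idf weighted matrix
print(bow.shape, tfidf.shape)
```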

Building Classification Models

The given problem is ordinal multiclass classification. There are five types of sentiments, so we have to train our models to give the correct label for the test dataset. I am going to build different models: Naive Bayes, Logistic Regression, Random Forest, XGBoost, Support Vector Machines, CatBoost, and Stochastic Gradient Descent.

I first treated the problem as multiclass classification, i.e., the dependent variable has the values Positive, Extremely Positive, Neutral, Negative, and Extremely Negative. I also converted the problem into binary classification, i.e., I clubbed all tweets into just two types, Positive and Negative. You can also go for three-class classification, i.e., Positive, Negative, and Neutral, in order to achieve greater accuracy. In the evaluation phase, we will compare the results of these algorithms.
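A compact sketch of this train-and-compare loop for a few of the listed models follows; the file name Corona_NLP_train.csv and the column names OriginalTweet and Sentiment are assumptions based on the public version of this dataset, and the hyperparameters are defaults rather than tuned values.

```python
# Sketch: train a few of the listed classifiers on TF-IDF features and
# compare accuracies. File and column names are assumptions; defaults only.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

df = pd.read_csv("Corona_NLP_train.csv", encoding="latin-1")   # assumed file name
tweets, labels = df["OriginalTweet"], df["Sentiment"]          # assumed column names

X_train, X_test, y_train, y_test = train_test_split(
    tweets, labels, test_size=0.2, random_state=42)

for clf in (MultinomialNB(), LogisticRegression(max_iter=1000), SGDClassifier()):
    model = make_pipeline(TfidfVectorizer(), clf)
    model.fit(X_train, y_train)
    print(type(clf).__name__, accuracy_score(y_test, model.predict(X_test)))
```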

Feature Importance

Feature importance (variable importance) describes which features are relevant. It can help with a better understanding of the solved problem and sometimes leads to model improvements through feature selection. The top three important feature words are panic, crisis, and scam, as we can see from the following graph.

Conclusion

In this way, we can explore more from various textual data and tweets. Our models try to predict the various sentiments correctly. I trained various models on our dataset; some show greater accuracy than others. For multiclass classification, the best model for this dataset was CatBoost. For binary classification, the best model was Stochastic Gradient Descent.

More Details: Covid Tracker on Twitter using Data Science and AI



NavAssist AI

Last Updated on May 3, 2021

About

Incorporating machine learning and haptic feedback, NavAssistAI detects the position and state of a crosswalk light, which enables it to aid the visually impaired in daily navigation.

Inspiration

One day, we were perusing YouTube looking for an idea for our school's science fair, and we came across a blind YouTuber named Tommy Edison. He had uploaded a video of himself attempting to cross a busy intersection on his own. It was apparent that he was having difficulty, and at one point he almost ran into a street sign. After seeing his video, we decided that we wanted to leverage new technology to help people like Tommy in daily navigation, so we created NavAssist AI.

What it does

In essence, NavAssist AI uses object detection to detect both the position and state of a crosswalk light (stop hand or walking person). It then processes this information and relays it to the user through haptic feedback in the form of vibration motors inside a headband. This allows the user to understand whether it is safe to cross the street or not, and in which direction they should face when crossing.

How we built it

We started out by gathering our own dataset of 200+ images of crosswalk lights, because there was no existing library of such images. We then ran through many iterations of many different models, training each one on this dataset. Across the different model architectures and iterations, we strove to find a balance between accuracy and speed, and eventually discovered that an SSDLite MobileNet model from the TensorFlow model zoo had the balance we required. Using transfer learning and many iterations, we trained a model that finally worked. We implemented it on a Raspberry Pi with a camera, soldered on a power button and vibration motors, and custom-designed a 3D-printed case with room for a battery. This made our prototype a wearable device.
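A heavily simplified sketch of the detect-then-vibrate loop is shown below; it assumes a TFLite-converted version of the detection model, and the model path, class ids, confidence threshold, and GPIO pin are hypothetical placeholders rather than the project's actual values.

```python
# Simplified sketch of the detect-then-vibrate loop on a Raspberry Pi.
# Model path, class ids, threshold, and GPIO pin are hypothetical; the real
# project used an SSDLite MobileNet model from the TensorFlow model zoo.
import cv2
import numpy as np
import RPi.GPIO as GPIO
from tflite_runtime.interpreter import Interpreter

WALK_CLASS, STOP_CLASS = 1, 2            # assumed label ids
MOTOR_PIN = 18                           # assumed vibration-motor pin

GPIO.setmode(GPIO.BCM)
GPIO.setup(MOTOR_PIN, GPIO.OUT)

interpreter = Interpreter(model_path="crosswalk_ssdlite.tflite")  # placeholder
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = inp["shape"][1], inp["shape"][2]
    img = cv2.resize(frame, (w, h))[np.newaxis].astype(np.uint8)  # quantized input assumed
    interpreter.set_tensor(inp["index"], img)
    interpreter.invoke()
    classes = interpreter.get_tensor(out[1]["index"])[0]  # typical SSD output order
    scores = interpreter.get_tensor(out[2]["index"])[0]
    walk = any(int(c) == WALK_CLASS and s > 0.5 for c, s in zip(classes, scores))
    GPIO.output(MOTOR_PIN, GPIO.HIGH if walk else GPIO.LOW)  # buzz on "walk"
```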

Challenges we ran into

When we started this project, we knew nothing about machine learning or TensorFlow and had to start from scratch. However, with some googling and trying things out, we were able to figure out how to implement TensorFlow for our project with relative ease. Another challenge was collecting, preparing, and labelling our dataset of 200+ images. However, our most important challenge was not knowing what it's like to be visually impaired. To overcome this, we went out to people in the blind community and talked to them, so that we could properly understand the problem and create a good solution.

Accomplishments that we're proud of

  • Making our first working model that could tell the difference between stop and go
  • Getting the haptic feedback implementation to work with the Raspberry Pi
  • When we first tested the device and successfully crossed the street
  • When we presented our work at TensorFlow World 2019

All of these milestones made us very proud because we are progressing towards something that could really help people in the world.

What we learned

Throughout the development of this project, we learned so much. Going into it, we had no idea what we were doing. Along the way, we learned about neural networks, machine learning, computer vision, as well as practical skills such as soldering and 3D CAD. Most of all, we learned that through perseverance and determination, you can make progress towards helping to solve problems in the world, even if you don't initially think you have the resources or knowledge.

What's next for NavAssistAI

We hope to expand its ability for detecting objects. For example, we would like to add detection for things such as obstacles so that it may aid in more than just crossing the street. We are also working to make the wearable device smaller and more portable, as our first prototype can be somewhat burdensome. In the future, we hope to eventually reach a point where it will be marketable, and we can start helping people everywhere.

More Details: NavAssist AI
