Book Recommendation System

Last Updated on May 3, 2021

About

Book Recommendation System

In this project, a book recommendation system is built and deployed to help users find books. Recommendations are driven by user feedback and ratings: the system analyses users' ratings, comments, and reviews online and uses opinion mining to judge whether comments are positive or negative in nature. A book the user searches for is displayed at the top of the list, and the user can also read the feedback other people have given about that book or any other searched item. When a user searches a large collection, the number of displayed items can make it hard to decide which one to choose; in that case the recommender helps by displaying the items of interest. This makes the approach trustworthy, since selection is based on the dataset.

Clustering

Clustering is the central idea of this project. Clustering groups elements by similarity: similar elements are kept together in a single group, while the remaining, dissimilar elements end up in another group, based on a similarity value or a maximum cluster size. The clustering approach used in this work is K-means clustering, applied to group similar users. It is one of the simplest unsupervised learning algorithms, and it simplifies the mining work by grouping similar elements into clusters. This is done using a parameter K, the number of centroids: the distance from each element to the centroids is computed to check similarity, and each element is placed in the cluster of its nearest centroid.

In this project, 6 clusters were made.
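A minimal sketch of this grouping step is shown below, assuming scikit-learn is available; the file name and the user_id, book_id, and rating column names are hypothetical stand-ins for the Kaggle ratings data.

# A minimal sketch of grouping similar users with K-means; the file and
# column names below are hypothetical stand-ins for the Kaggle ratings data.
import pandas as pd
from sklearn.cluster import KMeans

ratings = pd.read_csv("Ratings.csv")

# Build a user x book matrix of ratings; missing ratings are filled with 0
user_item = ratings.pivot_table(index="user_id", columns="book_id",
                                values="rating", fill_value=0)

# 6 clusters, as used in this project; each user is assigned to its nearest centroid
kmeans = KMeans(n_clusters=6, random_state=42)
clusters = kmeans.fit_predict(user_item)

Users that fall into the same cluster then form the pool from which books are recommended.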

The project is made with 2 separate datasets in .csv format taken from Kaggle:

  1. Books dataset
  2. Ratings

This project is GUI based. The output page has 2 options:

  1. Rate books
  2. Recommend books

The user can choose either option.

Rate books

In this option, the user can rate books.

Recommend books

In this option, books are recommended to the user based on their previous reading.

More Details: Book Recommendation System

Submitted By



Interfacing of Joystick Using PIC Microcontroller on LCD Display

Last Updated on May 3, 2021

About

This basic design consists of a stick that is attached to a plastic base with a flexible rubber sheath. 

The base houses a circuit board that sits directly underneath the stick. The circuit board is made up of several “printed wires,” which connect to several contact terminals. 

Ordinary wires extend from these contact points to the computer.

The printed wires form a simple electrical circuit made up of several smaller circuits.

 The circuits just carry electricity from one contact point to another.

 When the joystick is in the neutral position – when you’re not pushing one way or another – all but one of the individual circuits are broken.

 The conductive material in each wire doesn’t quite connect, so the circuit can’t conduct electricity.

Each broken section is covered with a simple plastic button containing a tiny metal disc. 

When you move the stick in any direction, it pushes down on one of these buttons, pressing the conductive metal disc against the circuit board.

This closes the circuit and it completes the connection between the two wire sections. 

When the circuit is closed, electricity can flow down a wire from the computer (or game console), through the printed wire, and to another wire leading back to the computer.

When the computer picks up a charge on a particular wire, it knows that the joystick is in the right position to complete that particular circuit. 

Pushing the stick forward closes the “forward switch,” pushing it left closes the “left switch,” and so on. 

The firing buttons work exactly the same way – when you press down, it completes a circuit and the computer recognizes a fire command.
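As a rough host-side illustration of this idea (not the actual PIC firmware), the sketch below maps closed switch circuits to commands; the switch_states dictionary is a hypothetical stand-in for whatever interface reports which contacts are closed.

# Hypothetical sketch: map whichever switch circuits are closed to commands.
DIRECTIONS = {"forward": "UP", "back": "DOWN", "left": "LEFT", "right": "RIGHT", "fire": "FIRE"}

def decode(switch_states):
    # switch_states is a dict such as {"forward": True, "left": False, ...},
    # where True means that circuit is closed (stick pushed that way or button pressed)
    return [DIRECTIONS[name] for name, closed in switch_states.items() if closed]

print(decode({"forward": True, "back": False, "left": False, "right": False, "fire": True}))
# ['UP', 'FIRE']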

More Details: Interfacing of Joystick using PIC microcontroller on LCD display.

Submitted By


Covid Tracker on Twitter Using Data Science and AI

Last Updated on May 3, 2021

About

Introduction

Hi folks, I hope you are doing well in these difficult times! We are all going through the unprecedented time of the coronavirus pandemic. Some people lost their lives, but many of us successfully recovered from this new strain, i.e. Covid-19. The virus was declared a pandemic by the World Health Organization on 11th March 2020. This article analyzes various types of “Tweets” gathered during the pandemic. The study can be helpful for different stakeholders.

For example, the Government can use this information in policymaking, since it shows how people are reacting to this new strain and what challenges they are facing, such as food scarcity, panic attacks, etc. For-profit organizations can benefit from analyzing these sentiments: one of the tweets tells us about a scarcity of masks and toilet paper, so such organizations can start producing essential items and thereby make a profit. NGOs can use pertinent facts and information to decide their strategy for how to rehabilitate people.

In this project, we are going to predict the sentiments of COVID-19 tweets. The data was gathered from Twitter, and I am going to use a Python environment to implement this project.

 

Problem Statement

The given challenge is to build a classification model to predict the sentiment of Covid-19 tweets. The tweets have been pulled from Twitter and manual tagging has been done. We are given information like Location, Tweet At, Original Tweet, and Sentiment.

Approach To Analyze Various Sentiments

Before we proceed further, one should know what is meant by Sentiment Analysis. Sentiment Analysis is the process of computationally identifying and categorizing opinions expressed in a piece of text, especially in order to determine whether the writer’s attitude towards a particular topic is Positive, Negative, or Neutral. (Oxford Dictionary)

The following is the standard operating procedure for tackling a Sentiment Analysis project of this kind. We will go through this procedure to predict what we are supposed to predict!

  1. Exploratory Data Analysis.

  2. Data Preprocessing.

  3. Vectorization.

  4. Classification Models.

  5. Evaluation.

  6. Conclusion.

Let’s Guess some tweets

I will read a tweet; can you tell me whether its sentiment is Positive, Negative, or Neutral? The first tweet is “Still shocked by the number of #Toronto supermarket employees working without some sort of mask. We all know by now, employees can be asymptomatic while spreading #coronavirus”. What’s your guess? Yes, you are correct: this is a Negative tweet because it contains negative words like “shocked”.

If you couldn’t guess the above tweet, don’t worry, I have another one for you. Let’s guess this tweet: “Due to the Covid-19 situation, we have increased demand for all food products. The wait time may be longer for all online orders, particularly beef share and freezer packs. We thank you for your patience during this time”. This time you are absolutely correct in predicting this tweet as “Positive”. Words like “thank you” and “increased demand” are optimistic in nature, so they place the tweet in the positive category.

Data Summary

The original dataset has 6 columns and 41157 rows. To analyze the sentiments, we require just two columns, Original Tweet and Sentiment. There are five types of sentiment: Extremely Negative, Negative, Neutral, Positive, and Extremely Positive, as you can see in the following picture.

Summary Of Dataset

 

Basic Exploratory Data Analysis

The “UserName” and “ScreenName” columns do not give any meaningful insights for our analysis, so we do not use these features for model building. All of the tweets were collected during March and April 2020. The following bar plot shows the number of unique values in each column.

There are some null values in the Location column, but we don’t need to deal with them because we are only using two columns, “Sentiment” and “Original Tweet”. The largest share of tweets (11.7%) came from London, as is evident from the following figure.

Words such as ‘coronavirus’ and ‘grocery store’ have the highest frequency in our dataset, as we can see from the following word cloud. There are various #hashtags in the tweets column, but they are almost the same across all sentiments, so they do not give us meaningful information.

Word cloud showing the words with the highest frequency in our Tweet column

When we explore the ‘Sentiment’ column, we find that most people have positive sentiments about the various issues, which shows their optimism during pandemic times. Very few people have extremely negative thoughts about Covid-19.

 

Data Pre-processing

Preprocessing the text data is an essential step, as it makes the raw text ready for mining. The objective of this step is to clean out noise that is less relevant for finding the sentiment of tweets, such as punctuation (., ?, ” etc.), special characters (@, %, &, $, etc.), numbers (1, 2, 3, etc.), Twitter handles, links (https:// and http://), and terms that don’t carry much weight in the context of the text.

Also, we need to remove stop words from tweets. Stop words are those words in natural language that have very little meaning, such as “is”, “an”, “the”, etc. To remove stop words from a sentence, you can divide your text into words and then remove the word if it exists in the list of stop words provided by NLTK.
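A minimal sketch of stop-word removal with NLTK is shown below, assuming the ‘stopwords’ and ‘punkt’ resources have already been downloaded with nltk.download(); the sample tweet is taken from the examples above.

# Stop-word removal with NLTK (assumes the 'stopwords' and 'punkt' resources are downloaded)
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

stop_words = set(stopwords.words("english"))
tweet = "We all know by now, employees can be asymptomatic while spreading coronavirus"
filtered = [w for w in word_tokenize(tweet) if w.lower() not in stop_words]
print(filtered)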

Then we need to normalize the tweets using Stemming or Lemmatization. “Stemming” is a rule-based process of stripping suffixes (“ing”, “ly”, “es”, “ed”, “s”, etc.) from a word. For example, “play”, “player”, “played”, “plays” and “playing” are different variations of the word “play”.

Stemming does not always turn the original word into a meaningful word. For example, “considered” gets stemmed to “consid”, which has no meaning and looks like a spelling mistake. The better way is to use Lemmatization instead of the stemming process.

Lemmatization is a more powerful operation, and it takes into consideration the morphological analysis of the words. It returns the lemma which is the base form of all its inflectional forms.

 

Here, the Lemmatization process converts the word “raising” to its base form “raise”. We also need to convert all tweets to lower case before we do the normalization.
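A minimal sketch contrasting the two steps with NLTK is shown below, assuming the ‘wordnet’ corpus has been downloaded with nltk.download().

# Stemming vs. lemmatization with NLTK (assumes the 'wordnet' corpus is downloaded)
from nltk.stem import PorterStemmer, WordNetLemmatizer

print(PorterStemmer().stem("raising"))                    # 'rais' - not a real word
print(WordNetLemmatizer().lemmatize("raising", pos="v"))  # 'raise' - the base form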

We can also include a tokenization step. In tokenization, we convert a group of sentences into tokens. It is also called text segmentation or lexical analysis; it is basically splitting the data into small chunks of words. Tokenization in Python can be done with the NLTK library’s word_tokenize() function.

Vectorization

We can use a count vectorizer or a TF-IDF vectorizer. A count vectorizer creates a sparse matrix of all words and the number of times they are present in each document.

TFIDF, short for term frequency-inverse document frequency, is a numerical statistic that is intended to reflect how important a word is to a document in a collection or corpus. The TF–IDF value increases proportionally to the number of times a word appears in the document and is offset by the number of documents in the corpus that contain the word, which helps to adjust for the fact that some words appear more frequently in general. (wiki)
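A minimal sketch of both vectorizers with scikit-learn is shown below; the two-tweet corpus is a hypothetical stand-in for the cleaned tweet column.

# Count and TF-IDF vectorization with scikit-learn on a tiny hypothetical corpus
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

corpus = ["still shocked employees working without a mask",
          "we thank you for your patience during this time"]

counts = CountVectorizer().fit_transform(corpus)  # sparse word-count matrix
tfidf = TfidfVectorizer().fit_transform(corpus)   # TF-IDF weighted matrix
print(counts.shape, tfidf.shape)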

Building Classification Models

The given problem is ordinal multiclass classification. There are five types of sentiment, so we have to train our models to give the correct label for the test dataset. I am going to build different models such as Naive Bayes, Logistic Regression, Random Forest, XGBoost, Support Vector Machines, CatBoost, and Stochastic Gradient Descent.

I first treated the problem as multiclass classification, that is, the dependent variable takes the values Positive, Extremely Positive, Neutral, Negative, and Extremely Negative. I also converted the problem into binary classification, i.e. I clubbed all tweets into just two types, Positive and Negative. You can also go for three-class classification (Positive, Negative, and Neutral) in order to achieve greater accuracy. In the evaluation phase, we will compare the results of these algorithms; a minimal sketch of one such model is shown below.
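The sketch below trains one of the listed models (Logistic Regression) on TF-IDF features; the four example tweets and their labels are hypothetical stand-ins for the real training data.

# A minimal binary-classification sketch; X_text and y are hypothetical stand-ins
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

X_text = ["still shocked by the lack of masks",
          "we thank you for your patience",
          "panic buying at the grocery store",
          "great community support today"]
y = ["Negative", "Positive", "Negative", "Positive"]

X_train, X_test, y_train, y_test = train_test_split(X_text, y, test_size=0.5,
                                                    random_state=42, stratify=y)
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print(accuracy_score(y_test, model.predict(X_test)))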

Feature Importance

Feature importance (variable importance) describes which features are relevant. It can help with a better understanding of the problem being solved and can sometimes lead to model improvements through feature selection. The top three important feature words are “panic”, “crisis”, and “scam”, as we can see from the following graph.
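A minimal sketch of reading feature importances from a tree-based model is shown below, reusing the same hypothetical tweets and labels as above and assuming scikit-learn 1.0+ for get_feature_names_out().

# Feature importance from a random forest on count features (hypothetical data)
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer

X_text = ["still shocked by the lack of masks", "we thank you for your patience",
          "panic buying at the grocery store", "great community support today"]
y = ["Negative", "Positive", "Negative", "Positive"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(X_text)
forest = RandomForestClassifier(random_state=42).fit(X, y)

ranked = sorted(zip(vectorizer.get_feature_names_out(), forest.feature_importances_),
                key=lambda pair: pair[1], reverse=True)
print(ranked[:3])  # the three most important words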

Conclusion

In this way, we can extract more insight from textual data and tweets. Our models try to predict the various sentiments correctly. I trained several models on our dataset; some show greater accuracy than others. For multiclass classification, the best model for this dataset is CatBoost. For binary classification, the best model for this dataset is Stochastic Gradient Descent.

More Details: Covid Tracker on Twitter using Data Science and AI

Submitted By


Smart Health Monitoring App

Last Updated on May 3, 2021

About

The proposed solution will be an online, mobile-based application. The app will contain information covering both the pre-natal and post-natal periods. It will help a pregnant woman understand pregnancy milestones and know when to worry and when not to. To use the app, the user registers by entering her name, age, mobile number, and preferred language. The app will be user-friendly: it will be multi-lingual and will include an audio-video guide to help people with impaired hearing or sight, keeping in mind women who live in rural areas or who have been deprived of primary education. The app will encompass two sections, pre-natal and post-natal.

In case of an emergency, i.e. when the water breaks (an indication of labour), there will be a provision to send an emergency message (notification) to FCM (Firebase Cloud Messaging). The app first tries to access the GPS setting on the phone; if GPS isn’t on, the Geolocation API is used instead. Using the Wi-Fi nodes that the mobile device can detect, the Internet, Google’s datasets, and nearby cell towers, a precise location is generated and sent via geocoding to FCM, which in turn generates push notifications. The tokens are sent to registered users, hospitals, nearby doctors, etc., and the necessary actions are taken so that timely help is provided. A minimal sketch of the notification step is shown below.
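This sketch uses the firebase-admin Python SDK to send such a push notification; the service-account file, registration token, and message text are hypothetical placeholders.

# Sending the emergency push notification through FCM with firebase-admin;
# the key file, token, and location text below are hypothetical placeholders.
import firebase_admin
from firebase_admin import credentials, messaging

cred = credentials.Certificate("service-account.json")   # hypothetical key file
firebase_admin.initialize_app(cred)

message = messaging.Message(
    notification=messaging.Notification(
        title="Emergency: immediate assistance needed",
        body="Water has broken; location: <latitude>, <longitude>",  # filled in by the geolocation step
    ),
    token="REGISTERED_DEVICE_TOKEN",  # hypothetical recipient token from FCM
)
print(messaging.send(message))  # returns the message ID on success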

More Details: Smart Health Monitoring App

Submitted By


Covid-19

Last Updated on May 3, 2021

About

#!/usr/bin/env python
# coding: utf-8

# In[1]:

# Imports for date handling, data manipulation, plotting, and HTTP/JSON access
import datetime as dt
from datetime import datetime, timedelta
import numpy as np
import pandas as pd
import time as tm
import warnings
warnings.filterwarnings("ignore")
import matplotlib.pyplot as plt
import requests
import json
import itertools

# Today's date, used later to filter the last seven days of data
today = dt.date.today()
print(today)



# In[2]:

# Fetch the latest country-level worldwide statistics and flatten the JSON into a DataFrame
# (pd.io.json.json_normalize is pd.json_normalize in newer pandas versions)
country = requests.get('https://api.coronatracker.com/v3/stats/worldometer/topCountry')
world = json.loads(country.text)
world = pd.io.json.json_normalize(world)



# In[3]:

# Pivot to one row per country, stamp the update time, and pull out India's totals
cases_world = pd.pivot_table(world, index=['country'], columns=None,
                             values=['totalConfirmed', 'totalRecovered', 'totalDeaths']).reset_index()
cases_world['last_update'] = dt.datetime.now()
incases = cases_world[cases_world['country'] == 'India'].reset_index().drop(columns=['index'])
incr = incases['totalConfirmed'][0]
inre = incases['totalRecovered'][0]
inde = incases['totalDeaths'][0]



# In[4]:

# Fetch India's day-wise case time series and reorder the columns
india_datewise = requests.get('https://api.covid19india.org/data.json')
in_date = json.loads(india_datewise.text)['cases_time_series']
in_date = pd.io.json.json_normalize(in_date)
in_date = in_date.iloc[0:len(in_date), [3, 0, 2, 1, 4, 6, 5]]



# In[5]:

# Split the textual date (e.g. "30 January") into day and month-name columns
in_date['day'] = in_date['date'].str.split(" ", expand=True)[0]
in_date['months'] = in_date['date'].str.split(" ", expand=True)[1]
in_date['year'] = 2020
in_date['month'] = 0



# In[6]:

# Map month names to month numbers (a dictionary lookup replaces the original if/elif chain)
month_map = {'January': '1', 'February': '2', 'March': '3', 'April': '4',
             'May': '5', 'June': '6', 'July': '7', 'August': '8',
             'September': '9', 'October': '10', 'November': '11', 'December': '12'}
in_date['month'] = in_date['months'].map(month_map)

# Build an ISO-style date string, convert it to datetime, keep the relevant columns, and sort newest first
in_date['year'] = in_date['year'].astype(str)
in_date['month'] = in_date['month'].astype(str)
in_date['day'] = in_date['day'].astype(str)
in_date['Date'] = in_date['year'] + '-' + in_date['month'] + '-' + in_date['day']
in_date['Date'] = pd.to_datetime(in_date['Date'])
in_date['Date'] = in_date['Date'].astype(str)
in_date = in_date[['Date', 'dailyconfirmed', 'dailyrecovered', 'dailydeceased',
                   'totalconfirmed', 'totalrecovered', 'totaldeceased']]
in_date = in_date.sort_values(by='Date', ascending=False)

# Cut-off date for the last seven days
last7 = str(today - dt.timedelta(days=7))



# In[7]:

# Keep only the rows from the last seven days
last7days = in_date[in_date['Date'] >= last7]
last7days









# In[8]:

# Fetch India's state-wise totals and reorder the columns for display
india_statewise = requests.get('https://api.covid19india.org/data.json')
in_state = json.loads(india_statewise.text)['statewise']
in_state = pd.io.json.json_normalize(in_state)
in_state = in_state.iloc[0:len(in_state), [9, 6, 0, 2, 1, 4, 5, 7, 8, 10, 11]]
in_state









# In[9]:

# Minimal Flask app that renders the India totals and the state-wise table
from flask import Flask, render_template

app = Flask(__name__)


@app.route("/")
def home():
    return render_template('index.html', total=incr, recovered=inre, death=inde,
                           column_names=in_state.columns.values,
                           row_data=list(in_state.values.tolist()),
                           link_column="cases", zip=zip)


if __name__ == "__main__":
    app.run()


Details of the project:

Python is a general-purpose, dynamic, high-level, interpreted programming language. It supports an object-oriented programming approach for developing applications. It is simple and easy to learn and provides many high-level data structures.

•Python is not tied to one particular area, such as web programming. That is why it is known as a multipurpose programming language: it can be used for the web, enterprise applications, 3D CAD, etc.

•Python laid its foundation in the late 1980s.

•The implementation of Python was started in December 1989 by Guido van Rossum at CWI in the Netherlands.

•In February 1991, Guido van Rossum published the code (labeled version 0.9.0) to alt.sources.

•In 1994, Python 1.0 was released with new features like lambda, map, filter, and reduce.


•Python provides many useful features which make it popular and valuable compared with other programming languages. It supports object-oriented and procedural programming approaches and provides dynamic memory allocation. A few essential features are listed below.

•Easy to Learn and Use

•Expressive Language

•Interpreted Language

•Cross-platform Language

•Free and Open Source

•Object-Oriented Language

•Extensible

•Large Standard Library

•GUI Programming Support

•Integrated

•Embeddable

•Dynamic Memory Allocation



•Python is known for its general-purpose nature, which makes it applicable in almost every domain of software development. Python makes its presence felt in every emerging field. It is the fastest-growing programming language and can be used to develop any kind of application.


Python libraries:-

•Numpy

•Pandas

•Matplotlib

Numpy:-NumPy is considered one of the most popular machine learning libraries in Python.

Features Of Numpy:-

Interactive: Numpy is very interactive and easy to use.

Mathematics: Makes complex mathematical implementations very simple.

Intuitive: Makes coding really easy, and grasping the concepts is easy too.

Lot of Interaction: Widely used, hence a lot of open source contribution.


Pandas:-Pandas is a machine learning library in Python that provides high-level data structures and a wide variety of tools for analysis. One of the great features of this library is the ability to express complex operations on data with one or two commands.

Features Of Pandas:-Pandas makes the entire process of manipulating data easier. Support for operations such as re-indexing, iteration, sorting, aggregations, concatenations, and visualizations is among the feature highlights of Pandas.

Matplotlib:-Matplotlib is an amazing visualization library in Python for 2D plots of arrays. It is a multi-platform data visualization library built on NumPy arrays and designed to work with the broader SciPy stack. It was introduced by John Hunter in 2002. Matplotlib offers several kinds of plots, such as line, bar, scatter, and histogram.


Datetime:-A date in Python is not a data type of its own, but we can import a module named datetime to work with dates as date objects.

•Import the datetime module and display the current date: import datetime. ...

•Return the year and name of weekday: import datetime. ...

•Create a date object: import datetime. ...

•Display the name of the month:
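A minimal sketch covering the datetime usages listed above:

import datetime

now = datetime.datetime.now()
print(now)                           # display the current date and time
print(now.year, now.strftime("%A"))  # the year and the name of the weekday
d = datetime.datetime(2020, 5, 17)   # create a date object
print(d.strftime("%B"))              # display the name of the month, e.g. 'May'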

Warnings:-The warnings module was introduced in PEP 230 as a way to warn programmers about changes in language or library features in anticipation of backwards incompatible changes coming with Python 3.0. Since warnings are not fatal, a program may encounter the same warn-able situation many times in the course of running.

Requests:-Requests is a Python HTTP library, released under the Apache License 2.0. The goal of the project is to make HTTP requests simpler and more human-friendly. 

Json:-The json library can parse JSON from strings or files. The library parses JSON into a Python dictionary or list. It can also convert Python dictionaries or lists into JSON strings.

Itertools:-Itertools is a standard-library module that provides fast, memory-efficient building blocks for working with iterators, such as count(), cycle(), and chain().


Flask:-Flask is a web application framework written in Python. It was developed by Armin Ronacher, who leads an international group of Python enthusiasts named Pocco. Flask is based on the Werkzeug WSGI toolkit and the Jinja2 template engine, both of which are Pocco projects.

Getting Started With Flask:

Python 2.6 or higher is required to install Flask. You can start by importing Flask from the flask package in any Python IDE; a minimal example is shown below.
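A minimal “Hello, World!” app, assuming Flask has been installed with pip:

from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # Rendered when the root URL is requested
    return "Hello, World!"

if __name__ == "__main__":
    app.run()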




More Details: covid-19

Submitted By


Interact With Quantum Computing Hardware Devices Using Amazon Braket

Last Updated on May 3, 2021

About

The Amazon Braket Python SDK is an open source library that provides a framework that you can use to interact with quantum computing hardware devices through Amazon Braket.

Prerequisites

Before you begin working with the Amazon Braket SDK, make sure that you've installed or configured the following prerequisites.


Python 3.7.2 or greater

Download and install Python 3.7.2 or greater from Python.org.


Git

Install Git from https://git-scm.com/downloads. Installation instructions are provided on the download page.


IAM user or role with required permissions

As a managed service, Amazon Braket performs operations on your behalf on the AWS hardware that is managed by Amazon Braket. Amazon Braket can perform only operations that the user permits. You can read more about which permissions are necessary in the AWS Documentation.

The Braket Python SDK should not require any additional permissions aside from what is required for using Braket. However, if you are using an IAM role with a path in it, you should grant permission for iam:GetRole.

To learn more about IAM user, roles, and policies, see Adding and Removing IAM Identity Permissions.


Boto3 and setting up AWS credentials

Follow the installation instructions for Boto3 and setting up AWS credentials.

Note: Make sure that your AWS region is set to one supported by Amazon Braket. You can check this in your AWS configuration file, which is located by default at ~/.aws/config.


Configure your AWS account with the resources necessary for Amazon Braket

If you are new to Amazon Braket, onboard to the service and create the resources necessary to use Amazon Braket using the AWS console.


Installing the Amazon Braket Python SDK

The Amazon Braket Python SDK can be installed with pip as follows:

pip install amazon-braket-sdk

You can also install from source by cloning this repository and running a pip install command in the root directory of the repository:

git clone https://github.com/aws/amazon-braket-sdk-python.git
cd amazon-braket-sdk-python
pip install .


Check the version you have installed

You can view the version of the amazon-braket-sdk you have installed by using the following command:

pip show amazon-braket-sdk

You can also check your version of amazon-braket-sdk from within Python:

>>> import braket._sdk as braket_sdk
>>> braket_sdk.__version__

Usage


Running a circuit on an AWS simulator

import boto3
from braket.aws import AwsDevice
from braket.circuits import Circuit

device = AwsDevice("arn:aws:braket:::device/quantum-simulator/amazon/sv1")
s3_folder = ("amazon-braket-Your-Bucket-Name", "folder-name") # Use the S3 bucket you created during onboarding

bell = Circuit().h(0).cnot(0, 1)
task = device.run(bell, s3_folder, shots=100)
print(task.result().measurement_counts)

The code sample imports the Amazon Braket framework, then defines the device to use (the SV1 AWS simulator). The s3_folder statement defines the Amazon S3 bucket for the task result and the folder in the bucket where the result is stored; this folder is created when you run the task. The sample then creates a Bell pair circuit, executes the circuit on the simulator, and prints the results of the job.

More Details: Interact with Quantum Computing Hardware Devices using Amazon Braket

Submitted By