Deep Learning: ANN


About

I learned deep learning with TensorFlow and Keras and built a simple project.
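As a rough illustration of such a project, here is a minimal Keras ANN sketch; the architecture and the synthetic data below are assumptions for demonstration, not the project's actual model.

```python
# A minimal ANN sketch with TensorFlow/Keras; layer sizes and the
# synthetic data are illustrative assumptions only.
import numpy as np
from tensorflow import keras

# Synthetic binary-classification data (illustrative only).
X = np.random.rand(1000, 8).astype("float32")
y = (X.sum(axis=1) > 4).astype("float32")

model = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=10, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))
```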


Bank_Loan_Default_Case


About

The objective of this problem is to predict whether a person is ‘Defaulted’ or ‘Not Defaulted’ on the basis of the given eight predictor variables.

The data consists of 8 independent variables and 1 dependent variable:

1. Age: a continuous variable depicting the age of the person.
2. Ed: a categorical variable holding the person's education category, converted to numerical form.
3. Employ: a categorical variable containing information about the geographic location of the person, also converted to numeric values.
4. Income: a continuous variable containing the gross income of each person.
5. DebtInc: a continuous variable giving an individual's debt relative to his or her gross income.
6. Creddebt: a continuous variable giving the debt-to-credit ratio, a measurement of how much a person owes their creditors as a percentage of their available credit.
7. Othdebt: a continuous variable describing any other debt the person owes.
8. Default: a categorical variable telling whether a person is a Default (1) or Not-Default (0).

After extensive exploratory data analysis, the data is fed to multiple models: Logistic Regression, Decision Tree, Random Forest, KNN, and Gradient Boosting classifiers, with and without hyperparameter tuning. The final results are compared on metrics such as precision score, recall score, and AUC-ROC score.
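A minimal sketch of that comparison loop, assuming a CSV with the columns listed above (the file name and split parameters are hypothetical, not the project's exact pipeline):

```python
# Sketch of the model comparison described above; the file name and the
# train/test split are assumptions for illustration.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import precision_score, recall_score, roc_auc_score

df = pd.read_csv("bank_loan.csv")                      # hypothetical file name
X, y = df.drop(columns=["Default"]), df["Default"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    "Random Forest": RandomForestClassifier(random_state=42),
    "KNN": KNeighborsClassifier(),
    "Gradient Boosting": GradientBoostingClassifier(random_state=42),
}
for name, clf in models.items():
    clf.fit(X_train, y_train)
    pred = clf.predict(X_test)
    prob = clf.predict_proba(X_test)[:, 1]
    print(f"{name}: precision={precision_score(y_test, pred):.3f}, "
          f"recall={recall_score(y_test, pred):.3f}, "
          f"auc={roc_auc_score(y_test, prob):.3f}")
```

Hyperparameter tuning (e.g. `GridSearchCV`) would wrap each estimator in the same loop.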



Vaccine Prediction


About

Can you predict whether people got H1N1 and seasonal flu vaccines using information they shared about their backgrounds, opinions, and health behaviors?

In this challenge, we will take a look at vaccination, a key public health measure used to fight infectious diseases. Vaccines provide immunization for individuals, and enough immunization in a community can further reduce the spread of diseases through "herd immunity."

As of the launch of this competition, vaccines for the COVID-19 virus are still under development and not yet available. The competition will instead revisit the public health response to a different recent major respiratory disease pandemic. Beginning in spring 2009, a pandemic caused by the H1N1 influenza virus, colloquially named "swine flu," swept across the world. Researchers estimate that in the first year, it was responsible for between 151,000 and 575,000 deaths globally.

A vaccine for the H1N1 flu virus became publicly available in October 2009. In late 2009 and early 2010, the United States conducted the National 2009 H1N1 Flu Survey. This phone survey asked respondents whether they had received the H1N1 and seasonal flu vaccines, in conjunction with questions about themselves. These additional questions covered their social, economic, and demographic background, opinions on risks of illness and vaccine effectiveness, and behaviors towards mitigating transmission. A better understanding of how these characteristics are associated with personal vaccination patterns can provide guidance for future public health efforts.

I created two models, one for the H1N1 vaccine and another for the seasonal vaccine.
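A minimal sketch of that two-model setup; the file and column names follow the DrivenData "Flu Shot Learning" data and the crude numeric-only preprocessing is a deliberate simplification, not the project's actual pipeline:

```python
# One binary classifier per target; file/column names and the
# numeric-only preprocessing are assumptions for illustration.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

features = pd.read_csv("training_set_features.csv", index_col="respondent_id")
labels = pd.read_csv("training_set_labels.csv", index_col="respondent_id")
X = features.select_dtypes("number").fillna(-1)    # drop text cols, impute NaNs

h1n1_model = RandomForestClassifier(random_state=0).fit(X, labels["h1n1_vaccine"])
seasonal_model = RandomForestClassifier(random_state=0).fit(X, labels["seasonal_vaccine"])

# The competition scores predicted probabilities (ROC AUC per target).
h1n1_prob = h1n1_model.predict_proba(X)[:, 1]
seasonal_prob = seasonal_model.predict_proba(X)[:, 1]
```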



Human-Computer Interaction Using Iris, Head, and Eye Detection


About

HCI focuses on interfaces between people and computers: how to design, evaluate, and implement interactive computer systems that satisfy the user. The human-computer interface can be described as the point of communication between the human user and the computer, and the flow of information between them is defined as the loop of interaction. HCI deals with the design, execution, and assessment of computer systems and related phenomena intended for human use. Here the HCI process is realised by a digital signal processing system that takes analog input from the user through dedicated hardware (a web camera) together with software.

Eye tracking is the process of measuring either the point of gaze (where one is looking) or the motion of the eye relative to the head. An eye tracker is a device for measuring eye positions and eye movement. Eye trackers are used in research on the visual system, in psychology, in marketing, as an input device for human-computer interaction, and in product design. There are a number of methods for measuring eye movement; the most popular variant uses video images from which the eye positions are extracted.

Early eye-movement studies used direct observation. They showed that reading does not involve a smooth sweeping of the eyes along the text, as previously assumed, but a series of short stops (called fixations). The records show conclusively that the character of the eye movement is either completely independent of, or only very slightly dependent on, the material of the picture and how it was made. The cyclical pattern in the examination of a picture depends not only on what is shown but also on the problem facing the observer and the information one hopes to get from the picture. Eye movement reflects the human thought process, so the observer's thoughts may be followed to some extent from records of eye movement: it is easy to determine from these records which elements attract the observer's eye, in what order, and how often.

We build a neural network here; there are two types of network: the feed-forward network and the feedback network.


Using video-oculography, horizontal and vertical eye movements tend to be easy to characterize, because they can be directly deduced from the position of the pupil. Torsional movements, which are rotational movements about the line of sight, are rather more difficult to measure; they cannot be directly deduced from the pupil, since the pupil is normally almost round and thus rotationally invariant. One effective way to measure torsion is to add artificial markers (physical markers, corneal tattoos, scleral markings, etc.) to the eye and then track these markers. However, the invasive nature of this approach tends to rule it out for many applications. Non-invasive methods instead attempt to measure the rotation of the iris by tracking the movement of visible iris structures.

Methodology

To measure a torsional movement of the iris, the image of the iris is typically transformed into polar co-ordinates about the center of the pupil; in this co-ordinate system, a rotation of the iris is visible as a simple translation of the polar image along the angle axis. Then, this translation is measured in one of three ways: visually, by using cross-correlation or template matching, or by tracking the movement of iris features. Methods based on visual inspection provide reliable estimates of the amount of torsion, but they are labour intensive and slow, especially when high accuracy is required. It can also be difficult to do visual matching when one of the pictures has an image of an eye in an eccentric gaze position.
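A sketch of that polar-unwrap-and-correlate idea, assuming the pupil centre and radius have already been fitted; OpenCV's `warpPolar` does the transformation, and a circular cross-correlation then reads torsion off as an angular shift (file names and the pupil fit are placeholders):

```python
# Unwrap the iris about the pupil centre, then estimate torsion as the
# shift along the angle axis via FFT-based circular cross-correlation.
# The pupil centre/radius and the file names are assumed inputs.
import cv2
import numpy as np

def iris_to_polar(gray, center, max_radius, n_angles=360):
    # Output rows correspond to angle, columns to radius.
    return cv2.warpPolar(gray, (int(max_radius), n_angles), center,
                         max_radius, cv2.WARP_POLAR_LINEAR)

def torsion_degrees(polar_ref, polar_cur):
    # 1-D angular profiles: mean intensity over radius, zero-centred.
    ref = polar_ref.mean(axis=1) - polar_ref.mean()
    cur = polar_cur.mean(axis=1) - polar_cur.mean()
    # Peak of the circular cross-correlation gives the angular shift.
    corr = np.fft.ifft(np.fft.fft(cur) * np.conj(np.fft.fft(ref))).real
    shift = int(np.argmax(corr))
    n = len(ref)
    if shift > n // 2:
        shift -= n                         # map to a signed rotation
    return shift * 360.0 / n

ref = cv2.imread("eye_reference.png", cv2.IMREAD_GRAYSCALE)   # hypothetical
cur = cv2.imread("eye_current.png", cv2.IMREAD_GRAYSCALE)
center, radius = (320, 240), 120                              # assumed pupil fit
print(torsion_degrees(iris_to_polar(ref, center, radius),
                      iris_to_polar(cur, center, radius)))
```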

If instead one uses a method based on cross-correlation or template matching, then the method will have difficulty coping with imperfect pupil tracking, eccentric gaze positions, changes in pupil size, and non-uniform lighting. There have been some attempts to deal with these difficulties, but even after the corrections have been applied, there is no guarantee that accurate tracking can be maintained. Indeed, each of the corrections can bias the results.

The remaining approach, tracking features in the iris image, can also be problematic. Features can be marked manually, but this process is time intensive, operator dependent, and can be difficult when the image contrast is low. Alternatively, one can use small local features like edges and corners. However, such features can disappear or shift when the lighting and shadowing on the iris changes, for example, during an eye movement or a change in ambient lighting. This means that it is necessary to compensate for the lighting in the image before calculating the amount of movement of each local feature.

In our application of the Maximally Stable Volumes detector, we choose the third dimension to be time, not space, which means that we can identify two-dimensional features that persist in time. The resulting features are maximally stable in space (2-D) and time (1-D), which means that they are 3-D intensity troughs with steep edges. However, the method of Maximally Stable Volumes is rather memory intensive, meaning that it can only be used for a small number of frames (in our case, 130 frames) at a time. Thus, we divide up the original movie into shorter overlapping movie segments for the purpose of finding features. We use an overlap of four frames, since the features become unreliable at the ends of each sub-movie. We set the parameters of the Maximally Stable Volumes detector such that we find almost all possible features. Of these features, we only use those that are near to the detected pupil center (up to 6 mm away) and small (smaller than roughly 1% of the iris region). We remove features that are large in angular extent (the pupil and the edges of the eyelids), as well as features that are further from the pupil than the edges of the eyelids (eyelashes).
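The segment bookkeeping in that scheme is simple; here is a small sketch using the segment length and overlap stated above:

```python
# Fixed-length windows of 130 frames with a four-frame overlap between
# consecutive windows, as described in the text.
def segment_indices(n_frames, seg_len=130, overlap=4):
    step = seg_len - overlap
    starts = range(0, max(n_frames - overlap, 1), step)
    return [(s, min(s + seg_len, n_frames)) for s in starts]

print(segment_indices(400))
# [(0, 130), (126, 256), (252, 382), (378, 400)]
```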

We track the eye movement and convert it into mouse-pointer motion using the Euclidean distance, which can greatly help disabled people. A virtual keyboard has also been implemented.
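A toy sketch of that mapping; `pyautogui`, the dead zone, and the gain are our assumptions here, not the project's exact code:

```python
# Map pupil displacement from a rest position to relative mouse motion,
# gated by the Euclidean distance; threshold and gain are illustrative.
import math
import pyautogui

def move_mouse(pupil, rest, dead_zone=10, gain=2.0):
    dx, dy = pupil[0] - rest[0], pupil[1] - rest[1]
    if math.hypot(dx, dy) < dead_zone:   # ignore jitter near the rest position
        return
    pyautogui.moveRel(gain * dx, gain * dy)

# Called once per video frame with the detected pupil position, e.g.:
move_mouse(pupil=(330, 255), rest=(320, 240))
```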





Bakery Management API


About

# Bakery Management System


This **Bakery Management System** is based on the Django REST Framework and uses token authentication.

For a better understanding, read the documentation at [docs](https://documenter.getpostman.com/view/14584052/TWDdjZYb).

The project is live at [Bakery](https://bakery-management-api.herokuapp.com/).


## Steps to run the API:


1. Install the dependencies: `pip install -r requirements.txt`

2. Run `python manage.py makemigrations`

3. Run `python manage.py migrate`

4. Run `python manage.py runserver`




## API-OVERVIEW


Now enter http://127.0.0.1:8000/ in your browser; this will give the details about the functionality offered.

To perform any of the operations mentioned, just append the corresponding relative URL to http://127.0.0.1:8000/.


***Note: All the endpoints corresponding to Ingredients and Dishes are accessible only to ADMINs.***





## MANAGING THE ACCOUNTS REGISTRATION AND AUTHENTICATION




### Registering an ADMIN


We can only register an admin through the Django admin panel. To access the Django admin panel you have to create a superuser.

Follow these steps to register an ADMIN user:

1. Run `python manage.py createsuperuser`

2. Fill in all the details (username, email and password)

3. Now go to http://127.0.0.1:8000/admin/ and log in with the credentials you just entered.

4. Register the admin through the USERS section (tick `is_staff`; only then will the user be treated as an ADMIN)



### Registering a CUSTOMER


URL - http://127.0.0.1:8000/accounts/register/ REQUEST-TYPE =[POST] **:**


This uses a POST request and expects username, email, password, first_name, last_name to be entered through a JSON object or form data. The username needs to be UNIQUE.
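A hypothetical request for this endpoint (the field values are illustrative):

```python
# Register a customer; field names are those listed above, values made up.
import requests

resp = requests.post("http://127.0.0.1:8000/accounts/register/", json={
    "username": "jane_doe",
    "email": "jane@example.com",
    "password": "s3cret-pass",
    "first_name": "Jane",
    "last_name": "Doe",
})
print(resp.status_code, resp.json())
```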



### LOGGING IN A USER


URL - http://127.0.0.1:8000/accounts/login/ REQUEST-TYPE =[POST] **:**


This uses a POST request and expects username and password. After a successful login it returns a Token and an Expiry.

Expiry denotes how long the token is valid; after the expiry you need to log in again.
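A sketch of logging in and reusing the token; the `token`/`expiry` response keys and the `Token <key>` header format follow the usual Django REST Framework convention and are assumptions here:

```python
# Log in, then send the token on a subsequent request.
import requests

login = requests.post("http://127.0.0.1:8000/accounts/login/",
                      json={"username": "jane_doe", "password": "s3cret-pass"})
token = login.json()["token"]            # response also carries the expiry

headers = {"Authorization": f"Token {token}"}
print(requests.get("http://127.0.0.1:8000/menu/", headers=headers).json())
```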



### LOGGING OUT A USER


URL - http://127.0.0.1:8000/accounts/logout/ REQUEST-TYPE =[] **:**

For this, provide the token in the header. The user whose token you entered will be logged out.






## OPERATIONS ON INGREDIENTS (ACCESSIBLE ONLY TO ADMINS)



### Adding an Ingredient


URL - http://127.0.0.1:8000/ingredients/ REQUEST-TYPE =[POST]  **:**

This uses a POST request and expects name, quantity, quantity_type, cost_price to be entered through a JSON object or form data. The name needs to be UNIQUE, and Django adds a primary key named "id" by default. The quantity_type field takes exactly one of three choices: 'kg' for kilogram, 'lt' for litre, or '_' for plain numbers.
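A hypothetical payload matching those fields (requires an ADMIN token, see the login section; the dish endpoint below follows the same pattern):

```python
# Add an ingredient as an ADMIN; the token is a placeholder.
import requests

headers = {"Authorization": "Token <admin-token>"}
resp = requests.post("http://127.0.0.1:8000/ingredients/", headers=headers, json={
    "name": "Flour",
    "quantity": 25,
    "quantity_type": "kg",    # one of 'kg', 'lt' or '_'
    "cost_price": 30,
})
print(resp.status_code, resp.json())
```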



### Get list of all Ingredients


URL - http://127.0.0.1:8000/ingredients/  REQUEST-TYPE =[GET]  **:**

This returns a JSON value containing the list of all ingredients.



### Getting details of a single Ingredient


URL - http://127.0.0.1:8000/ingredients/id/ REQUEST-TYPE =[GET]  **:**

The "id" mentioned in the above url must be an integer referring to the "id" of the ingredient you want to fetch.This returns details of the 

ingredient you mentioned.



### Deleting a single Ingredient


URL - http://127.0.0.1:8000/ingredients/id/ REQUEST-TYPE =[DELETE]  **:**

The "id" mentioned in the above url must be an integer referring to the "id" of the ingredient you want to fetch.This deletes the 

ingredient you mentioned.






## OPERATIONS ON MENU (ACCESSIBLE ONLY TO ADMINS)




### Adding a dish to the menu


URL - http://127.0.0.1:8000/menu/ REQUEST-TYPE =[POST]  **:**

This uses a POST request and expects name, quantity, description, cost_price, selling_price, ingredients to be entered through a JSON object or form data. The name needs to be UNIQUE, and Django adds a primary key named "id" by default. The ingredients field can contain multiple ingredient ids.



### Get list of all dishes (available to CUSTOMERs also)


URL - http://127.0.0.1:8000/menu/  REQUEST-TYPE =[GET]   **:**

This returns a JSON value containing the details of all dishes.


***Note: This API depends on the type of user logged in. If a CUSTOMER is logged in, it will return the names and prices only.***



### Getting details of a single Dish


URL - http://127.0.0.1:8000/menu/id/ REQUEST-TYPE =[GET]   **:**

The "id" mentioned in the above url must be an integer referring to the "id" of the Dish you want to fetch.This returns details of the 

Dish you mentioned.



### Deleting a single Dish


URL - http://127.0.0.1:8000/menu/id/ REQUEST-TYPE =[DELETE]  **:**

The "id" mentioned in the above url must be an integer referring to the "id" of the Dish you want to fetch.This deletes the 

Dish you mentioned






## OPERATIONS ON ORDERS (ACCESSIBLE TO THE CUSTOMER)



### Adding/Placing an order 


URL - http://127.0.0.1:8000/order/ REQUEST-TYPE =[POST]   **:** 

This uses a POST request and expects orderby, items_ordered to be entered through a JSON object or form data. Django adds a primary key named "id" by default. The items_ordered field can contain multiple dish ids.
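A hypothetical payload for placing an order (the `orderby` value is an assumption about how that field identifies the customer):

```python
# Place an order as a logged-in CUSTOMER; the token is a placeholder.
import requests

headers = {"Authorization": "Token <customer-token>"}
resp = requests.post("http://127.0.0.1:8000/order/", headers=headers, json={
    "orderby": "jane_doe",        # assumed to identify the ordering customer
    "items_ordered": [1, 3],      # ids of the dishes being ordered
})
print(resp.status_code, resp.json())
```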



### Getting details of a single Order 


URL - http://127.0.0.1:8000/order/id/ REQUEST-TYPE =[GET]   **:**  

The "id" mentioned in the above url must be an integer referring to the "id" of the Order you want to fetch.This returns details of the 

Order you mentioned.


### Deleting a single Order 


URL - http://127.0.0.1:8000/order/id/ REQUEST-TYPE =[DELETE]   **:**

The "id" mentioned in the above url must be an integer referring to the "id" of the Order you want to delete.This deletes the 

Order you mentioned


### Order History 


URL - http://127.0.0.1:8000/order/history/ REQUEST-TYPE =[GET]  **:**  

This will return all the orders placed by the customer making the request (latest first).




Smart Glasses for the Visually Impaired


About


This is our second-year Hardware & Software Tools project. We wanted to invent something that would benefit handicapped people in some way. One of us came up with this idea for glasses that could help blind people sense whether there is an object in front of them that they might hit their head on. The white cane they use when walking helps them navigate the ground but does not do much for what is up above. Using an Arduino Pro Mini MCU, an ultrasonic sensor, and a buzzer, we created glasses that sense the distance to an object in front and beep to alert the person that something is there. They are simple and inexpensive to make. Credit to http://robu.in for some of the parts.

 

These “Smart Glasses” are designed to help blind people read and translate typed text written in English. Inventions of this kind are considered a way to motivate blind students to complete their education despite all their difficulties. The main objective is to develop a new way for blind people to read text and to facilitate their communication. The first task of the glasses is to scan a text image and convert it into audio, so the person can listen through a headphone connected to the glasses. The second task is to translate the whole text, or some words of it, by pressing a button that is also connected to the glasses.

The glasses use several technologies to perform these tasks: OCR, text-to-speech (gTTS), and Google translation. Detecting the text in the image is done with OpenCV and Optical Character Recognition (OCR) using Tesseract together with the Efficient and Accurate Scene Text Detector (EAST). To convert the text into speech, the glasses use the gTTS text-to-speech library, and for translating the text they use the Google Translate API. The glasses are fitted with an ultrasonic sensor that measures the distance between the user and the object bearing the text, so that a clear picture can be taken; the picture is taken when the user presses a button. Moreover, a motion sensor is used to introduce the user to the university's halls, classes, and lab locations via a Radio-Frequency Identification (RFID) reader. All the computing and processing operations are done on a Raspberry Pi 3 B+ and a Raspberry Pi 3 B.

As for the results, the combination of OCR with the EAST detector provides high accuracy, with the glasses recognising almost 99% of the text. However, the glasses have some drawbacks: they support only the English language, and the distance for capturing images is limited to between 40 and 150 cm. As a future plan, it is possible to support many languages and to enhance the design to make it smaller and more comfortable to wear.
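A sketch of the scan-and-speak core of that pipeline; the file names are placeholders, and the EAST detection and translation steps are omitted for brevity:

```python
# OCR a captured image with Tesseract, then synthesise speech with gTTS.
import cv2
import pytesseract
from gtts import gTTS

image = cv2.imread("captured_page.png")             # hypothetical capture
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)      # simple pre-processing
text = pytesseract.image_to_string(gray, lang="eng")

gTTS(text=text, lang="en").save("speech.mp3")       # played via the headphone
```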

 
