Determining Ethnicity Using Cnn On Image Data

Last Updated on May 3, 2021

About

Ranked Silver in a Kaggle competition for predicting ethnicity from extremely low-resolution (28x28) images.

More Details: Determining Ethnicity using CNN on Image Data

Submitted By


Share with someone who needs it

Human Computer Interaction Using Iris, Head And Eye Detection

Last Updated on May 3, 2021

About

HCI stands for human-computer interaction, meaning the interaction between humans and computers.

We need to improve HCI because it directly determines user interaction and usability: a rich design encourages users, while a poor design keeps them at bay.

We also need to design for different categories of people, of different ages, backgrounds, and genders, and to make interfaces accessible to older people.

It is also our moral responsibility to make them accessible to disabled people.

So this project tracks the head, eyes, and iris to detect eye movement using the Viola-Jones algorithm. However, this algorithm does not work with masks on, as it relies on facial features to calculate distances.

It uses the Euclidean distance to calculate how far features move between the previous frame and the next frame, and plots a graph of the result.

It also uses the formula theta = arctan(b/a) to calculate the deviation.
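The two formulas above can be sketched together; the centroid coordinates below are made-up example values, not the project's data:

```python
# Sketch: frame-to-frame displacement of a tracked centroid, combining the
# Euclidean distance with theta = arctan(b/a) for the deviation angle.
import math

def deviation(prev, curr):
    """prev, curr: (x, y) centroids from consecutive frames."""
    a = curr[0] - prev[0]                    # horizontal displacement
    b = curr[1] - prev[1]                    # vertical displacement
    dist = math.hypot(a, b)                  # Euclidean distance between frames
    theta = math.degrees(math.atan2(b, a))   # deviation angle in degrees
    return dist, theta

d, t = deviation((100, 120), (104, 123))     # e.g. an iris moving 4 px right, 3 px down
```

`atan2` is used instead of a bare `tan⁻¹(b/a)` so the angle's sign and quadrant stay correct when `a` is zero or negative.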

Here we use an ANN because ANNs can work with incomplete data. Specifically, we use a constructive (generative) neural network: it starts by capturing each user's images to build individual patterns and track the eye.

We then build the neural network and train it to predict the eye movement.

Finally, we convert the predictions into mouse movement, clicks, and double-clicks on icons and the virtual keyboard.

As contributing, moral individuals, it is our duty to make devices compatible with all age groups and with differently abled persons.

More Details: Human Computer Interaction using iris, head and eye detection

Submitted By


Heart Attack Prediction

Last Updated on May 3, 2021

About

I did this project in the first semester of my MTech studies at Ahmedabad University. It is about predicting heart attacks from parameters such as cholesterol, blood pressure, exercise, age, sex, chest pain type, and slope. The dataset, which I got from Kaggle, was 27 KB in size, with 13 columns and 303 rows.

First, I cleaned the data by removing outliers, null values, and duplicates. After that, I did some data visualization to get insights from the data:

  • People aged above 40 are the most likely to have suffered a heart attack at some point in their life.
  • Heart rate and chest pain are highly correlated with heart attacks.
  • Stress and cholesterol are also among the main contributing factors.
  • Patients suffering from heart disease have higher cholesterol than patients who are not.

In this project, I used different machine learning algorithms to predict heart attacks: logistic regression achieved 85% accuracy, and a decision tree achieved 72%. The final decision tree also shows the parameters affecting the prediction, in order of correlation.
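A minimal sketch of the two-model comparison described above; the Kaggle dataset is replaced here by synthetic stand-in data of the same shape (303 rows, 13 features), so the column names, preprocessing, and exact accuracies are not reproduced:

```python
# Sketch: comparing logistic regression and a decision tree with scikit-learn.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# Stand-in for the 303-row, 13-column Kaggle heart dataset.
X, y = make_classification(n_samples=303, n_features=13, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

scores = {}
for model in (LogisticRegression(max_iter=1000),
              DecisionTreeClassifier(random_state=42)):
    model.fit(X_train, y_train)                       # train on 80% of the rows
    scores[type(model).__name__] = model.score(X_test, y_test)  # accuracy
```

On the real dataset, the same loop would report the 85% (logistic regression) and 72% (decision tree) figures quoted above.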

More Details: Heart Attack Prediction

Submitted By


NavAssist AI

Last Updated on May 3, 2021

About

Incorporating machine learning and haptic feedback, NavAssistAI detects the position and state of a crosswalk light, enabling it to aid the visually impaired in daily navigation.

Inspiration

One day, we were perusing YouTube looking for an idea for our school's science fair. On that day, we came across a blind YouTuber named Tommy Edison. He had uploaded a video of himself attempting to cross a busy intersection on his own. It was apparent that he was having difficulty, and at one point he almost ran into a street sign. After seeing his video, we decided that we wanted to leverage new technology to help people like Tommy in daily navigation, so we created NavAssist AI.

What it does

In essence, NavAssist AI uses object detection to detect both the position and state of a crosswalk light (stop hand or walking person). It then processes this information and relays it to the user through haptic feedback in the form of vibration motors inside a headband. This allows the user to understand whether it is safe to cross the street or not, and in which direction they should face when crossing.
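A hypothetical sketch of the detection-to-haptics mapping described above; the label names, steering scheme, and intensity scale are invented for illustration and are not NavAssistAI's actual code:

```python
# Sketch: map a crosswalk-light detection to (left, right) vibration intensities
# for the two motors in the headband (0.0 = off, 1.0 = full vibration).
def haptic_command(label, x_center, frame_width):
    """label: detected state; x_center: detection centroid x in pixels."""
    if label == "walk":
        base = 1.0       # steady vibration: safe to cross
    elif label == "stop":
        base = 0.0       # no vibration: do not cross
    else:
        return (0.0, 0.0)
    # Steer the user toward the light: the motor on the side where the
    # light appears vibrates more strongly.
    offset = (x_center / frame_width) - 0.5   # -0.5 (left) .. +0.5 (right)
    left = base * max(0.0, min(1.0, 1.0 - 2 * offset))
    right = base * max(0.0, min(1.0, 1.0 + 2 * offset))
    return (round(left, 2), round(right, 2))
```

On the real device, these intensities would drive the vibration motors through the Raspberry Pi's GPIO pins.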

How we built it

We started out by gathering our own dataset of 200+ images of crosswalk lights because there was no existing library for those images. We then ran through many iterations on many different models, training each model on this data set. Through the different model architectures and iterations, we strove to find a balance between accuracy and speed. We eventually discovered an SSDLite Mobilenet model from the TensorFlow model zoo had the balance we required. Using transfer learning and many iterations we trained a model that finally worked. We implemented it onto a raspberry pi with a camera, soldered on a power button and vibration motors, and custom-designed a 3D printed case with room for a battery. This made our prototype wearable device.

Challenges we ran into

When we started this project, we knew nothing about machine learning or TensorFlow and had to start from scratch. However, with some googling and trying stuff out, we were able to figure out how to implement TensorFlow for our project with relative ease. Another challenge was collecting, preparing, and labelling our data set of 200+ images. Our most important challenge, though, was not knowing what it's like to be visually impaired. To overcome this, we went out to people in the blind community and talked to them so that we could properly understand the problem and create a good solution.

Accomplishments that we're proud of

  • Making our first working model that could tell the difference between stop and go
  • Getting the haptic feedback implementation to work with the Raspberry Pi
  • When we first tested the device and successfully crossed the street
  • When we presented our work at TensorFlow World 2019

All of these milestones made us very proud because we are progressing towards something that could really help people in the world.

What we learned

Throughout the development of this project, we learned so much. Going into it, we had no idea what we were doing. Along the way, we learned about neural networks, machine learning, computer vision, as well as practical skills such as soldering and 3D CAD. Most of all, we learned that through perseverance and determination, you can make progress towards helping to solve problems in the world, even if you don't initially think you have the resources or knowledge.

What's next for NavAssistAI

We hope to expand its ability for detecting objects. For example, we would like to add detection for things such as obstacles so that it may aid in more than just crossing the street. We are also working to make the wearable device smaller and more portable, as our first prototype can be somewhat burdensome. In the future, we hope to eventually reach a point where it will be marketable, and we can start helping people everywhere.

More Details: NavAssist AI

Submitted By


Social Distance Monitoring System (Python, Deep Learning And OpenCV) (Research Paper)

Last Updated on May 3, 2021

About

Social distancing is one of the community mitigation measures that may be recommended during the Covid-19 pandemic. It can reduce virus transmission by increasing physical distance or reducing the frequency of congregation in socially dense community settings, such as ATMs, airports, or marketplaces.

The Covid-19 pandemic has demonstrated that we cannot expect to geographically contain the next influenza pandemic in the location where it emerges, nor can we expect to prevent the international spread of infection for more than a short period. Vaccines are not expected to be available during the early stage of the next pandemic (1). We therefore came up with this system to limit the spread of COVID by ensuring social distancing among people. It uses CCTV camera feeds to identify social-distancing violations.

We first apply object detection using a YOLOv3 model trained on the COCO dataset, which has 80 classes. YOLO uses the Darknet framework to process the incoming feed frame by frame. It returns the detections with their IDs, centroids, corner coordinates, and confidences in the form of multidimensional ndarrays. We take that information and discard the detections whose class is not "person". We draw bounding boxes to highlight the detections in each frame, then use the centroids to calculate the Euclidean distance between people in pixels. If the distance between two centroids is less than the configured value, the system throws an alert with a beeping sound and turns the violators' bounding boxes red.
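The centroid check described above might be sketched as follows (the pixel threshold is an assumed configured value, and this is not the paper's full pipeline):

```python
# Sketch: flag every pair of person centroids closer than a pixel threshold.
from itertools import combinations
from math import dist

MIN_DISTANCE = 50  # configured threshold in pixels (assumed value)

def find_violations(centroids):
    """centroids: list of (x, y) person centroids from the YOLO detections.
    Returns the indices of detections violating the distance rule."""
    violators = set()
    for (i, a), (j, b) in combinations(enumerate(centroids), 2):
        if dist(a, b) < MIN_DISTANCE:   # Euclidean distance in pixels
            violators.update((i, j))    # these boxes get drawn red + a beep
    return violators
```

Comparing all pairs is O(n²) per frame, which is acceptable for the handful of people typically visible in a single CCTV frame.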

Research paper link: https://ieeexplore.ieee.org/document/9410745

More Details: Social Distance Monitoring System (Python, Deep Learning And OpenCV) (Research paper)

Submitted By


Face Detection And Recognition

Last Updated on May 3, 2021

About

The project is built in three parts, as follows:



  1. Face Dataset: Detects a person's face and captures 30 images of it.

Code as follows:

'''
Capture multiple faces from multiple users to be stored in a database (dataset directory)
   ==> Faces will be stored in the directory: dataset/
   ==> Each face will have a unique numeric integer ID as 1, 2, 3, etc

Developed by Ujwal, Yash, Adinath, Sanket under the guidance of Prianka ma'am, group no. 49
'''

import cv2
import os

os.makedirs('dataset', exist_ok=True)  # create the dataset directory if it does not exist

cam = cv2.VideoCapture(0)
cam.set(3, 640) # set video width
cam.set(4, 480) # set video height

face_detector = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

# For each person, enter one numeric face id
face_id = input('\n enter user id end press <return> ==>  ')

print("\n [INFO] Initializing face capture. Look the camera and wait ...")
# Initialize individual sampling face count
count = 0

while(True):

    ret, img = cam.read()
    img = cv2.flip(img, 1) # flip video image horizontally (mirror view)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, 1.3, 5)

    for (x,y,w,h) in faces:

        cv2.rectangle(img, (x,y), (x+w,y+h), (255,0,0), 2)     
        count += 1

        # Save the captured image into the datasets folder
        cv2.imwrite("dataset/User." + str(face_id) + '.' + str(count) + ".jpg", gray[y:y+h,x:x+w])

        cv2.imshow('image', img)

    k = cv2.waitKey(100) & 0xff # Press 'ESC' for exiting video
    if k == 27:
        break
    elif count >= 30: # Take 30 face sample and stop video
         break

# Do a bit of cleanup
print("\n [INFO] Exiting Program and cleanup stuff")
cam.release()
cv2.destroyAllWindows()


2. Face Training: Trains on the 30 captured images after converting them to grayscale. For easier detection, I have applied noise reduction to the images.

Code as follows:

'''
Training Multiple Faces stored in a DataBase:
   ==> Each face should have a unique numeric integer ID as 1, 2, 3, etc
   ==> The computed LBPH model will be saved in the trainer/ directory
   ==> for using PIL, install the pillow library with "pip install pillow"

Developed by Ujwal, Yash, Adinath, Sanket under the guidance of Prianka ma'am, group no. 49
'''

import cv2
import numpy as np
from PIL import Image
import os

# Path for face image database
path = 'dataset'

recognizer = cv2.face.LBPHFaceRecognizer_create()
detector = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

# function to get the images and label data
def getImagesAndLabels(path):

    imagePaths = [os.path.join(path,f) for f in os.listdir(path)]     
    faceSamples=[]
    ids = []

    for imagePath in imagePaths:

        PIL_img = Image.open(imagePath).convert('L') # convert it to grayscale
        img_numpy = np.array(PIL_img,'uint8')

        id = int(os.path.split(imagePath)[-1].split(".")[1])
        faces = detector.detectMultiScale(img_numpy)

        for (x,y,w,h) in faces:
            faceSamples.append(img_numpy[y:y+h,x:x+w])
            ids.append(id)

    return faceSamples,ids

print ("\n [INFO] Training faces. It will take a few seconds. Wait ...")
faces,ids = getImagesAndLabels(path)
recognizer.train(faces, np.array(ids))

# Save the model into trainer/trainer.yml
os.makedirs('trainer', exist_ok=True)  # create the trainer directory if it does not exist
recognizer.write('trainer/trainer.yml') # recognizer.save() worked on Mac, but not on Pi

# Print the number of faces trained and end the program
print("\n [INFO] {0} faces trained. Exiting Program".format(len(np.unique(ids))))


3. Face Recognition: Recognises a face only if that person's images are present in the dataset; otherwise it shows an unknown person. If the person is recognised by the system, it shows the match as a percentage.

Code as follows:

'''
Real Time Face Recognition
   ==> Each face stored in the dataset/ dir should have a unique numeric integer ID as 1, 2, 3, etc
   ==> The computed LBPH model (trained faces) should be in the trainer/ dir

Developed by Ujwal, Yash, Adinath, Sanket under the guidance of Prianka ma'am, group no. 49
'''

import cv2
import numpy as np
import os 

recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read('trainer/trainer.yml')
cascadePath = "haarcascade_frontalface_default.xml"
faceCascade = cv2.CascadeClassifier(cascadePath)

font = cv2.FONT_HERSHEY_SIMPLEX

# initialize id counter
id = 0

# names related to ids: example ==> Marcelo: id=1,  etc
names = ['None','Ujwal','Yash','Adinath','Sanket']

# Initialize and start realtime video capture
cam = cv2.VideoCapture(0)
cam.set(3, 640) # set video width
cam.set(4, 480) # set video height

# Define min window size to be recognized as a face
minW = 0.1*cam.get(3)
minH = 0.1*cam.get(4)

while True:

    ret, img =cam.read()
    # img = cv2.flip(img, -1) # Flip vertically

    gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)

    faces = faceCascade.detectMultiScale( 
        gray,
        scaleFactor = 1.2,
        minNeighbors = 5,
        minSize = (int(minW), int(minH)),
       )

    for(x,y,w,h) in faces:

        cv2.rectangle(img, (x,y), (x+w,y+h), (0,255,0), 2)

        id, confidence = recognizer.predict(gray[y:y+h,x:x+w])

        # Check if confidence is less than 100 ==> "0" is a perfect match
        if (confidence < 100):
            id = names[id]
            confidence = "  {0}%".format(round(100 - confidence))
        else:
            id = "unknown"
            confidence = "  {0}%".format(round(100 - confidence))
        
        cv2.putText(img, str(id), (x+5,y-5), font, 1, (255,255,255), 2)
        cv2.putText(img, str(confidence), (x+5,y+h-5), font, 1, (255,255,0), 1)  
    
    cv2.imshow('camera',img) 

    k = cv2.waitKey(10) & 0xff # Press 'ESC' for exiting video
    if k == 27:
        break

# Do a bit of cleanup
print("\n [INFO] Exiting Program and cleanup stuff")
cam.release()
cv2.destroyAllWindows()


More Details: Face Detection And Recognition

Submitted By