Covid Detection Using CT Scans & X-Rays

Last Updated on May 3, 2021

About

This is the latest project that my group and I implemented as part of our capstone project for the final semester of engineering.

The aim of the project was to build an Android application that incorporates a deep learning model capable of detecting the Covid virus from a patient's CT scan or X-ray. I was responsible for developing and training the model and converting it into a form that can be used in an Android app. The method I used was a CNN with transfer learning, which reuses the low-level features of the pretrained InceptionV3 model for faster convergence. The model was built using a dataset from Kaggle.
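The training code itself is not reproduced in this write-up, but a minimal sketch of the approach described above (InceptionV3 as a frozen feature extractor with a small classification head, followed by a TensorFlow Lite conversion for the Android app) might look like the following; the image size, the layers of the head, and the file names are assumptions:

import tensorflow as tf
from tensorflow.keras import layers, models

# Load InceptionV3 pretrained on ImageNet, without its classification head.
base = tf.keras.applications.InceptionV3(
    weights='imagenet', include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # freeze the pretrained low-level features

# Small classification head for the binary Covid / non-Covid decision.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation='relu'),
    layers.Dropout(0.3),
    layers.Dense(1, activation='sigmoid'),
])

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# train_ds / val_ds would be built from the Kaggle CT-scan / X-ray images,
# e.g. with tf.keras.utils.image_dataset_from_directory(...)
# model.fit(train_ds, validation_data=val_ds, epochs=10)

# Convert the trained model so it can be bundled into the Android app.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
with open('covid_detector.tflite', 'wb') as f:
    f.write(converter.convert())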


The technologies used were Python 3, TensorFlow, Keras, and scikit-learn, along with supporting libraries such as Pandas and NumPy; the model itself is a CNN.

More Details: Covid Detection using CT Scans & X-Rays

Submitted By



ATM

Last Updated on May 3, 2021

About

My friends and I completed this project with the help of the mentor assigned to us. The project simulates the operation of an ATM, developed in Python.


For this project we imported sqlite3 and tkinter (as tk). Tkinter is used for the GUI; it provides a powerful object-oriented interface to the Tk GUI toolkit. We created user-defined functions such as creating_db, insert_money, insert_atm, check_100, check_200, check_500, check_2000, wd_money, update_bal, and main_page. When the code runs, the GUI window opens with the message 'Welcome to ATM' and, below it, the 100/-, 200/-, 500/- and 2000/- note denominations. To deposit money, click Insert Money; to withdraw, click Withdraw; and to check the availability of a particular denomination, click Check Availability beside that note. The availability result is displayed in the Python shell, also known as the REPL (Read, Evaluate, Print, Loop), which reads a command, evaluates it, prints the result, and loops back to read the next command. The balance is updated after every insert or withdrawal, and all transaction details are stored in SQLite.
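The full source is not shown here, but a minimal sketch of the database side of the project (helpers in the style of creating_db, update_bal and the check functions described above, with an assumed table layout) could look like this:

import sqlite3

DB_FILE = 'atm.db'  # assumed database file name

def creating_db():
    """Create the table that tracks how many notes of each denomination the ATM holds."""
    con = sqlite3.connect(DB_FILE)
    cur = con.cursor()
    cur.execute("""CREATE TABLE IF NOT EXISTS notes (
                       denomination INTEGER PRIMARY KEY,
                       count INTEGER NOT NULL DEFAULT 0)""")
    for denom in (100, 200, 500, 2000):
        cur.execute("INSERT OR IGNORE INTO notes (denomination, count) VALUES (?, 0)", (denom,))
    con.commit()
    con.close()

def update_bal(denomination, change):
    """Add (insert) or subtract (withdraw) notes of one denomination."""
    con = sqlite3.connect(DB_FILE)
    cur = con.cursor()
    cur.execute("UPDATE notes SET count = count + ? WHERE denomination = ?",
                (change, denomination))
    con.commit()
    con.close()

def check_notes(denomination):
    """Print the availability of one denomination to the Python shell."""
    con = sqlite3.connect(DB_FILE)
    cur = con.cursor()
    cur.execute("SELECT count FROM notes WHERE denomination = ?", (denomination,))
    count = cur.fetchone()[0]
    con.close()
    print(f"{denomination}/- notes available: {count}")

if __name__ == '__main__':
    creating_db()
    update_bal(100, 5)   # insert five 100/- notes
    check_notes(100)     # prints: 100/- notes available: 5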


I hope this will be helpful to others.

More Details: ATM

Wholesale Management And Online Shopping Website

Last Updated on May 3, 2021

About

The Wholesale Management System web application provides businesses with a simplified and strategic way of generating and receiving invoices, tracking orders, monitoring sales, managing purchase orders, and maintaining inventory.

Inventory:

• Manage brand, category and subcategory details of products.

• Manage product details and minimum product stock details.

• View product reviews given by online customers.

• Generate inventory and product review reports.

• Manage raw materials and their minimum stock details.


Sales order:

• Manage sales orders of products and generate invoices for offline customers.

• Generate weekly, monthly and yearly sales summary reports.

Purchase order:

• Manage purchase orders for raw materials and generate invoice orders.

• Generate purchase summary reports for purchased raw materials.

 

Online order:

• Manage online orders.

• Update the order status of purchased products, such as order placed, shipped, delivered, etc.

• Generate invoices for placed orders.

 

Supplier:

• Manage supplier details.


Employee:

• Manage employee details.

Expense:

• Manage expense details such as rent, maintenance, light bill, food bill, etc.

• Generate weekly, monthly and yearly expense reports.


Customer:

• System provides a search facility on customer name, order placed, date of order, date of order dispatch, date of transaction, transaction amount, etc.

• System maintains details about order placement and dispatch.

• Customers can browse the product catalog and search for products.

• Customers are able to give product reviews.

 


 


More Details: Wholesale Management and Online Shopping website

Submitted By


Human Computer Interaction Using Iris, Head and Eye Detection

Last Updated on May 3, 2021

About

HCI focuses on the interfaces between people and computers and on how to design, evaluate, and implement interactive computer systems that satisfy the user. The human–computer interface can be described as the point of communication between the human user and the computer, and the flow of information between the human and the computer is defined as the loop of interaction. HCI deals with the design, execution and assessment of computer systems, and related phenomena, that are intended for human use. The HCI process here is implemented with a digital signal processing system that takes analog input from the user through dedicated hardware (a web camera) combined with software.

Eye tracking is the process of measuring either the point of gaze (where one is looking) or the motion of the eye relative to the head. An eye tracker is a device for measuring eye positions and eye movement. Eye trackers are used in research on the visual system, in psychology, in marketing, as an input device for human–computer interaction, and in product design. There are a number of methods for measuring eye movement; the most popular variant uses video images from which the eye positions are extracted. Eye movements can also be recorded by direct observation. It has been observed that reading does not involve a smooth sweep of the eyes along the text, as previously assumed, but a series of short stops (called fixations). The records show conclusively that the character of the eye movement is either completely independent of, or only very slightly dependent on, the material of the picture and how it is made. The cyclical pattern in the examination of a picture depends not only on what is shown in the picture, but also on the problem facing the observer and the information that one hopes to get from the picture. Eye movement reflects the human thought process, so the observer's thoughts may be followed to some extent from records of eye movement. It is easy to determine from these records which elements attract the observer's eye, in what order, and how often.

We also build a neural network here; there are two types of network: the feed-forward network and the feedback network.


Using video-oculography, horizontal and vertical eye movements tend to be easy to characterize, because they can be directly deduced from the position of the pupil. Torsional movements, which are rotational movements about the line of sight, are rather more difficult to measure; they cannot be directly deduced from the pupil, since the pupil is normally almost round and thus rotationally invariant. One effective way to measure torsion is to add artificial markers (physical markers, corneal tattoos, scleral markings, etc.) to the eye and then track these markers. However, the invasive nature of this approach tends to rule it out for many applications. Non-invasive methods instead attempt to measure the rotation of the iris by tracking the movement of visible iris structures.

Methodology

To measure a torsional movement of the iris, the image of the iris is typically transformed into polar co-ordinates about the center of the pupil; in this co-ordinate system, a rotation of the iris is visible as a simple translation of the polar image along the angle axis. Then, this translation is measured in one of three ways: visually, by using cross-correlation or template matching, or by tracking the movement of iris features. Methods based on visual inspection provide reliable estimates of the amount of torsion, but they are labour intensive and slow, especially when high accuracy is required. It can also be difficult to do visual matching when one of the pictures has an image of an eye in an eccentric gaze position.
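This write-up does not include code for that step, but a rough sketch of the polar unwrapping and cross-correlation idea (using OpenCV and NumPy; the pupil centre, the iris radii and the angular resolution below are assumed values) could look like this:

import cv2
import numpy as np

def unwrap_iris(gray, center, r_inner=30, r_outer=90, n_angles=360):
    """Sample the iris ring into polar coordinates (angle runs along the column axis)."""
    angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    radii = np.arange(r_inner, r_outer)
    # Sampling grids: one row per radius, one column per angle.
    xs = (center[0] + np.outer(radii, np.cos(angles))).astype(np.float32)
    ys = (center[1] + np.outer(radii, np.sin(angles))).astype(np.float32)
    return cv2.remap(gray, xs, ys, interpolation=cv2.INTER_LINEAR)

def estimate_torsion(polar_ref, polar_cur):
    """Torsion appears as a horizontal shift of the polar image; find it by circular cross-correlation."""
    ref = polar_ref.mean(axis=0) - polar_ref.mean()
    cur = polar_cur.mean(axis=0) - polar_cur.mean()
    scores = [np.dot(ref, np.roll(cur, s)) for s in range(len(ref))]
    shift = int(np.argmax(scores))
    if shift > len(ref) // 2:
        shift -= len(ref)                 # report the shift as a signed angle
    return shift * (360.0 / len(ref))     # torsion in degrees

# Usage (the pupil centres are assumed):
# ref_polar = unwrap_iris(reference_gray, (320, 240))
# cur_polar = unwrap_iris(current_gray, (322, 239))
# print(estimate_torsion(ref_polar, cur_polar))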

If instead one uses a method based on cross-correlation or template matching, then the method will have difficulty coping with imperfect pupil tracking, eccentric gaze positions, changes in pupil size, and non-uniform lighting. There have been some attempts to deal with these difficulties but even after the corrections have been applied, there is no guarantee that accurate tracking can be maintained. Indeed, each of the corrections can bias the results.

The remaining approach, tracking features in the iris image, can also be problematic. Features can be marked manually, but this process is time intensive, operator dependent, and can be difficult when the image contrast is low. Alternatively, one can use small local features like edges and corners. However, such features can disappear or shift when the lighting and shadowing on the iris changes, for example, during an eye movement or a change in ambient lighting. This means that it is necessary to compensate for the lighting in the image before calculating the amount of movement of each local feature.

In our application of the Maximally Stable Volumes detector, we choose the third dimension to be time, not space, which means that we can identify two-dimensional features that persist in time. The resulting features are maximally stable in space (2-D) and time (1-D), which means that they are 3-D intensity troughs with steep edges. However, the method of Maximally Stable Volumes is rather memory intensive, meaning that it can only be used for a small number of frames (in our case, 130 frames) at a time. Thus, we divide up the original movie into shorter overlapping movie segments for the purpose of finding features. We use an overlap of four frames, since the features become unreliable at the ends of each sub-movie. We set the parameters of the Maximally Stable Volumes detector such that we find almost all possible features. Of these features, we only use those that are near to the detected pupil center (up to 6 mm away) and small (smaller than roughly 1% of the iris region). We remove features that are large in angular extent (the pupil and the edges of the eyelids), as well as features that are further from the pupil than the edges of the eyelids (eyelashes).
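As a small illustration of that segmentation scheme (the 130-frame segment length and the four-frame overlap are the values stated above; the helper name is hypothetical):

def split_into_segments(n_frames, seg_len=130, overlap=4):
    """Return (start, end) frame indices for overlapping movie segments."""
    segments = []
    start = 0
    while start < n_frames:
        end = min(start + seg_len, n_frames)
        segments.append((start, end))
        if end == n_frames:
            break
        start = end - overlap  # each new segment re-uses the last few frames
    return segments

# e.g. a 400-frame movie:
# split_into_segments(400)  ->  [(0, 130), (126, 256), (252, 382), (378, 400)]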

We used this to track the eye movement and convert it into mouse-pointer movement using the Euclidean distance, which would greatly help disabled people. I have also implemented a virtual keyboard.
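The conversion itself is not shown in the write-up, but a minimal sketch of the idea (move the cursor when the pupil centre drifts more than a threshold Euclidean distance from a calibrated rest position; pyautogui, the rest position, the threshold and the gain are assumptions) might look like this:

import math
import pyautogui  # assumed library used to move the mouse cursor

REST = (320, 240)   # calibrated pupil position when looking straight ahead (assumed)
THRESHOLD = 15      # minimum Euclidean distance in pixels before the cursor moves
GAIN = 0.5          # how strongly eye displacement translates into cursor motion

def move_cursor(pupil_x, pupil_y):
    """Move the mouse in the direction of gaze once the eye has moved far enough."""
    dx = pupil_x - REST[0]
    dy = pupil_y - REST[1]
    distance = math.hypot(dx, dy)          # Euclidean distance from the rest position
    if distance > THRESHOLD:
        pyautogui.moveRel(GAIN * dx, GAIN * dy)

# Inside the tracking loop, after the pupil centre (cx, cy) has been detected:
# move_cursor(cx, cy)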



More Details: Human Computer Interaction using iris, head and eye detection

Submitted By


Research Paper On Face Detection Using Haar Cascade Classifier

Last Updated on May 3, 2021

About

Abstract:

In the last several years, face detection has been listed as one of the most engaging fields of research. Face detection algorithms are used to detect frontal human faces, and face detection finds use in many applications such as face tracking, face analysis, and face recognition. In this paper, we discuss face detection using a Haar cascade classifier and OpenCV, focusing on some of the face detection technology in use.
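The paper itself contains no code, but a minimal example of the technique it describes (detecting frontal faces in a still image with OpenCV's bundled Haar cascade; the image file name is a placeholder) looks roughly like this:

import cv2

# Load OpenCV's pre-trained frontal-face Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

img = cv2.imread('people.jpg')               # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Detect faces; scaleFactor and minNeighbors trade off speed against accuracy.
faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)

cv2.imwrite('people_detected.jpg', img)
print("Found", len(faces), "face(s)")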



Conclusion:

In this study, we covered and studied in detail the face detection technique using the Haar cascade classifier and OpenCV to obtain the desired output. Using the OpenCV library, the Haar cascade classifier was able to perform successful face detection with high accuracy and efficiency. We also used the OpenCV package to extract some features of the face in order to compare them, and we discussed some popular face detection methods. Further, we discussed the future scope of face detection and some of its applications. Finally, we conclude that the future of facial detection technology is bright; security and surveillance are the major segments that will be most deeply influenced, while other areas now welcoming it include private industry, public buildings, and schools.

More Details: Research paper On FACE DETECTION USING HAAR CASCADE CLASSIFIER

Submitted By


Face Detection and Recognition

Last Updated on May 3, 2021

About

The project is built in three parts, as follows:



1. Face Dataset: Detects the face of a person and captures 30 images of that person's face.

Code as follows:

'''
Capture multiple faces from multiple users to be stored in a database (dataset directory)
   ==> Faces will be stored in the directory: dataset/ (if it does not exist, please create one)
   ==> Each face will have a unique numeric integer ID such as 1, 2, 3, etc.

Developed by Ujwal, Yash, Adinath, Sanket under the guidance of Prianka ma'am, group no. 49
'''

import cv2
import os

cam = cv2.VideoCapture(0)
cam.set(3, 640) # set video width
cam.set(4, 480) # set video height

face_detector = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

# For each person, enter one numeric face id
face_id = input('\n enter user id end press <return> ==>  ')

print("\n [INFO] Initializing face capture. Look the camera and wait ...")
# Initialize individual sampling face count
count = 0

while(True):

    ret, img = cam.read()
    img = cv2.flip(img, 1) # mirror the video image horizontally
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, 1.3, 5)

    for (x,y,w,h) in faces:

        cv2.rectangle(img, (x,y), (x+w,y+h), (255,0,0), 2)     
        count += 1

        # Save the captured image into the datasets folder
        cv2.imwrite("dataset/User." + str(face_id) + '.' + str(count) + ".jpg", gray[y:y+h,x:x+w])

        cv2.imshow('image', img)

    k = cv2.waitKey(100) & 0xff # Press 'ESC' for exiting video
    if k == 27:
        break
    elif count >= 30: # Take 30 face samples and stop video
        break

# Do a bit of cleanup
print("\n [INFO] Exiting Program and cleanup stuff")
cam.release()
cv2.destroyAllWindows()


2. Face Training: Trains on the 30 captured images after converting them to grayscale. For easier detection, I have applied code to reduce noise in the images.

Code as follows:

'''
Training multiple faces stored in a database:
   ==> Each face should have a unique numeric integer ID such as 1, 2, 3, etc.
   ==> The computed LBPH model will be saved in the trainer/ directory (if it does not exist, please create one)
   ==> To use PIL, install the Pillow library with "pip install pillow"

Developed by Ujwal, Yash, Adinath, Sanket under the guidance of Prianka ma'am, group no. 49
'''

import cv2
import numpy as np
from PIL import Image
import os

# Path for face image database
path = 'dataset'

recognizer = cv2.face.LBPHFaceRecognizer_create()
detector = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

# function to get the images and label data
def getImagesAndLabels(path):

    imagePaths = [os.path.join(path,f) for f in os.listdir(path)]     
    faceSamples=[]
    ids = []

    for imagePath in imagePaths:

        PIL_img = Image.open(imagePath).convert('L') # convert it to grayscale
        img_numpy = np.array(PIL_img,'uint8')

        id = int(os.path.split(imagePath)[-1].split(".")[1])
        faces = detector.detectMultiScale(img_numpy)

        for (x,y,w,h) in faces:
            faceSamples.append(img_numpy[y:y+h,x:x+w])
            ids.append(id)

    return faceSamples,ids

print ("\n [INFO] Training faces. It will take a few seconds. Wait ...")
faces,ids = getImagesAndLabels(path)
recognizer.train(faces, np.array(ids))

# Save the model into trainer/trainer.yml
recognizer.write('trainer/trainer.yml') # recognizer.save() worked on Mac, but not on Pi

# Print the number of faces trained and end the program
print("\n [INFO] {0} faces trained. Exiting Program".format(len(np.unique(ids))))


3. Face Recognition: Recognises a face only if that person's images are present in the dataset; otherwise it labels the person as unknown. If the person is recognised by the system, it shows the match as a percentage.

Code as follows:

'''
Real-time face recognition
   ==> Each face stored in the dataset/ directory should have a unique numeric integer ID such as 1, 2, 3, etc.
   ==> The computed LBPH model (trained faces) should be in the trainer/ directory

Developed by Ujwal, Yash, Adinath, Sanket under the guidance of Prianka ma'am, group no. 49
'''

import cv2
import numpy as np
import os 

recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read('trainer/trainer.yml')
cascadePath = "haarcascade_frontalface_default.xml"
faceCascade = cv2.CascadeClassifier(cascadePath)

font = cv2.FONT_HERSHEY_SIMPLEX

# initialise the id counter
id = 0

# names related to ids: example ==> Marcelo: id=1,  etc
names = ['None','Ujwal','Yash','Adinath','Sanket']

# Initialize and start realtime video capture
cam = cv2.VideoCapture(0)
cam.set(3, 640) # set video width
cam.set(4, 480) # set video height

# Define min window size to be recognized as a face
minW = 0.1*cam.get(3)
minH = 0.1*cam.get(4)

while True:

    ret, img =cam.read()
    # img = cv2.flip(img, -1) # flip the image both horizontally and vertically, if needed

    gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)

    faces = faceCascade.detectMultiScale( 
        gray,
        scaleFactor = 1.2,
        minNeighbors = 5,
        minSize = (int(minW), int(minH)),
       )

    for(x,y,w,h) in faces:

        cv2.rectangle(img, (x,y), (x+w,y+h), (0,255,0), 2)

        id, confidence = recognizer.predict(gray[y:y+h,x:x+w])

        # Check if confidence is less than 100 ==> "0" is a perfect match
        if (confidence < 100):
            id = names[id]
            confidence = "  {0}%".format(round(100 - confidence))
        else:
            id = "unknown"
            confidence = "  {0}%".format(round(100 - confidence))
        
        cv2.putText(img, str(id), (x+5,y-5), font, 1, (255,255,255), 2)
        cv2.putText(img, str(confidence), (x+5,y+h-5), font, 1, (255,255,0), 1)  
    
    cv2.imshow('camera',img) 

    k = cv2.waitKey(10) & 0xff # Press 'ESC' for exiting video
    if k == 27:
        break

# Do a bit of cleanup
print("\n [INFO] Exiting Program and cleanup stuff")
cam.release()
cv2.destroyAllWindows()


More Details: Face Detection And Recognise

Submitted By