Institute Database

Last Updated on May 3, 2021



This program models the data-storage needs of an educational institute. It is a good example of inheritance in C++.




Human Computer Interaction Using Iris, Head And Eye Detection



HCI stands for human-computer interaction, which means the interaction between humans and computers.

We need to improve HCI because only then will user interaction and usability improve: a rich design encourages users, while a poor design keeps them at bay.

We also need to design for different categories of people, differing in age, color, gender, etc., and we need to make our designs accessible to older people.

It is our moral responsibility to make it accessible to disabled people.

This project therefore tracks the head, eyes, and iris to detect eye movement using the Viola-Jones algorithm. Note that this algorithm does not work with a mask on, since it relies on facial features to calculate distances.

It uses the Euclidean distance between the eye position in the previous frame and in the next frame, and plots the result as a graph.

It also uses the formula theta = tan⁻¹(b/a) to calculate the angular deviation.
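As a sketch of these two calculations (the names `a` and `b` for the horizontal and vertical displacements between frames are assumptions, since the write-up does not define them; `atan2` is used instead of a bare tan⁻¹ so the angle is correct in all quadrants):

```python
import math

def euclidean_distance(p, q):
    # Distance between the eye position in the previous frame (p)
    # and in the current frame (q), each given as (x, y).
    return math.hypot(q[0] - p[0], q[1] - p[1])

def deviation_angle(a, b):
    # theta = tan^-1(b/a): angular deviation of the movement, in degrees,
    # where a and b are the horizontal and vertical displacements.
    return math.degrees(math.atan2(b, a))

prev, curr = (100, 120), (103, 124)     # eye centres in two frames (illustrative)
dist = euclidean_distance(prev, curr)   # 5.0
theta = deviation_angle(curr[0] - prev[0], curr[1] - prev[1])
```

Plotting `dist` per frame pair gives the movement graph described above.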

We use an ANN here because ANNs can work with incomplete data. Specifically, we use a constructive (generative) neural network, which starts by capturing our individual images at the beginning to create individual patterns and then tracks the eye.

We then build the neural network and train it to predict eye movement.
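A minimal sketch of the training idea, using a single-hidden-layer network in NumPy; the features (per-frame eye displacements), direction labels, layer sizes, and learning rate are all illustrative assumptions, since the project's actual architecture is not specified:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: (dx, dy) eye displacements labelled with a gaze direction,
# 0=right, 1=up, 2=left, 3=down -- purely illustrative.
X = rng.normal(size=(400, 2))
y = (np.round(np.arctan2(X[:, 1], X[:, 0]) / (np.pi / 2)) % 4).astype(int)

# One hidden layer (tanh) and a softmax output over the 4 directions
W1 = rng.normal(scale=0.5, size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 4)); b2 = np.zeros(4)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, p / p.sum(axis=1, keepdims=True)

for _ in range(1000):                       # plain full-batch gradient descent
    h, p = forward(X)
    grad = p.copy()                         # d(loss)/d(logits) for cross-entropy
    grad[np.arange(len(y)), y] -= 1
    grad /= len(y)
    dW2 = h.T @ grad; db2 = grad.sum(0)
    dh = grad @ W2.T * (1 - h**2)           # tanh derivative
    dW1 = X.T @ dh; db1 = dh.sum(0)
    for P, G in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        P -= 0.5 * G                        # in-place update

accuracy = (forward(X)[1].argmax(1) == y).mean()
```

After training, the predicted class for each frame's displacement stands in for the tracked eye-movement direction.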

Finally, we convert the predictions into mouse movement, clicks, and double-clicks on icons and the virtual keyboard.
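The conversion step could be sketched as a pure mapping function; the angle bins and the blink-to-click convention below are assumptions for illustration (in a real app, a library such as PyAutoGUI would then perform the cursor movement and clicks):

```python
def to_mouse_action(theta_deg, blink_count):
    # Map the deviation angle to a cursor direction, and blinks to clicks.
    if blink_count >= 2:
        return "double-click"
    if blink_count == 1:
        return "click"
    if -45 <= theta_deg < 45:
        return "move right"
    if 45 <= theta_deg < 135:
        return "move up"
    if theta_deg >= 135 or theta_deg < -135:
        return "move left"
    return "move down"
```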

As contributing, moral individuals, it is our duty to make devices compatible with all age groups and with differently abled persons.

More Details: Human Computer Interaction using iris, head and eye detection


Comcast Telecom Consumer Complaints



Comcast is an American global telecommunications company. The firm has been providing terrible customer service, and it continues to fall short despite repeated promises to improve. In October 2016 alone, the authority fined the company $2.3 million after receiving over 1,000 consumer complaints.

The existing database will serve as a repository of public customer complaints filed against Comcast.

It will help to pin down what is wrong with Comcast's customer service.

Data Dictionary

  • Ticket #: Ticket number assigned to each complaint
  • Customer Complaint: Description of complaint
  • Date: Date of complaint
  • Time: Time of complaint
  • Received Via: Mode of communication of the complaint
  • City: Customer city
  • State: Customer state
  • Zipcode: Customer zip
  • Status: Status of complaint
  • Filing on Behalf of Someone: Whether the complaint was filed on behalf of someone else

Analysis Task

To perform these tasks, you can use any of the different Python libraries such as NumPy, SciPy, Pandas, scikit-learn, matplotlib, and BeautifulSoup.

- Import data into Python environment.

- Provide the trend chart for the number of complaints at monthly and daily granularity levels.

- Provide a table with the frequency of complaint types.

  • Which complaint types are most frequent, i.e., whether the bulk of complaints concern internet issues, network issues, or other domains.

- Create a new categorical variable with values Open and Closed: Open & Pending are categorized as Open, and Closed & Solved are categorized as Closed.

- Provide the state-wise status of complaints in a stacked bar chart. Use the categorized variable created in the previous step. Provide insights on:

  • Which state has the maximum complaints
  • Which state has the highest percentage of unresolved complaints

- Provide the percentage of complaints resolved to date among those received through the Internet and through customer-care calls.

Provide the analysis results with insights wherever applicable.
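A few of the tasks above can be sketched in pandas; the column names follow the data dictionary, but the file name comcast.csv and the sample rows are assumptions for illustration:

```python
import pandas as pd

# Illustrative rows standing in for the real CSV.
# In practice: df = pd.read_csv("comcast.csv", parse_dates=["Date"])
df = pd.DataFrame({
    "Date":   ["2017-04-01", "2017-04-01", "2017-05-02", "2017-05-03"],
    "State":  ["Georgia", "Texas", "Georgia", "Florida"],
    "Status": ["Open", "Pending", "Closed", "Solved"],
})
df["Date"] = pd.to_datetime(df["Date"])

# Monthly complaint counts (input for the trend chart)
monthly = df.groupby(df["Date"].dt.to_period("M")).size()

# New categorical variable: Open & Pending -> Open, Closed & Solved -> Closed
df["NewStatus"] = df["Status"].map(
    {"Open": "Open", "Pending": "Open", "Closed": "Closed", "Solved": "Closed"})

# State-wise counts by new status (input for the stacked bar chart)
state_status = df.groupby(["State", "NewStatus"]).size().unstack(fill_value=0)
```

Calling `state_status.plot(kind="bar", stacked=True)` with matplotlib would then produce the stacked bar chart.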

More Details: Comcast Telecom Consumer Complaints


CIFAR-10 Image Classification Using TensorFlow



The CIFAR-10 dataset consists of 60000 32x32 color images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images.

The dataset is divided into five training batches and one test batch, each with 10000 images. The test batch contains exactly 1000 randomly-selected images from each class. The training batches contain the remaining images in random order, but some training batches may contain more images from one class than another. Between them, the training batches contain exactly 5000 images from each class.

The classes are completely mutually exclusive. There is no overlap between automobiles and trucks. "Automobile" includes sedans, SUVs, things of that sort. "Truck" includes only big trucks. Neither includes pickup trucks.

The CIFAR-10 dataset is a collection of images commonly used to train machine learning and computer vision algorithms, and it is one of the most widely used datasets in machine learning research. The 10 classes represent airplanes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks.

Computer algorithms for recognizing objects in photos often learn by example, and CIFAR-10 is a set of images that can be used to teach a computer how to recognize objects. Since the images in CIFAR-10 are low-resolution (32x32), the dataset allows researchers to quickly try different algorithms to see what works. Various kinds of convolutional neural networks tend to be best at recognizing the images in CIFAR-10.
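A minimal tf.keras sketch of such a classifier; the architecture is a generic small CNN chosen for illustration, not the project's tuned model:

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# A small CNN for 32x32 RGB inputs and 10 classes
model = keras.Sequential([
    keras.Input(shape=(32, 32, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# In practice the data would come from the built-in loader:
# (x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()
# x_train, x_test = x_train / 255.0, x_test / 255.0
#, y_train, epochs=10, validation_data=(x_test, y_test))
```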

More Details: CIFAR-10 image classification using TensorFlow


Smart Health Monitoring App



The proposed solution is an online, mobile-based application containing information on the pre- and post-maternal periods. The app helps a pregnant woman track pregnancy milestones and know when to worry and when not to. To use it, a user registers by entering her name, age, mobile number, and preferred language. The app is user-friendly: it is multi-lingual and offers an audio-video guide for people with impaired hearing or sight, keeping in mind women who live in rural areas or were deprived of primary education. The app encompasses two sections: pre-natal and post-natal.

In an emergency, i.e. when the water breaks, there is a provision to send an emergency message (notification) to FCM (Firebase Cloud Messaging). The app first tries to access the phone's GPS; if GPS is off, the Geolocation API is used instead. Using the Wi-Fi nodes the device can detect, the Internet, Google's datasets, and nearby towers, a precise location is generated and sent via geocoding to FCM, which in turn generates push notifications. The tokens are sent to registered users, hospitals, nearby doctors, etc., so that necessary actions can be taken and timely help provided.
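The notification step could be sketched as the message an app server hands to FCM. The function below only builds the payload in the shape of FCM's HTTP v1 message resource; the token, titles, and coordinates are illustrative assumptions, and the actual send would go through a library such as firebase-admin, which needs project credentials:

```python
def build_emergency_message(user_token, lat, lon):
    # FCM-style message: a push notification for the recipient, plus a
    # data section carrying the geocoded location for hospitals/doctors.
    return {
        "message": {
            "token": user_token,
            "notification": {
                "title": "Emergency: immediate help needed",
                "body": "A registered user has triggered an emergency alert.",
            },
            "data": {
                "latitude": str(lat),
                "longitude": str(lon),
            },
        }
    }

# Hypothetical device token and location
msg = build_emergency_message("device-token-123", 18.5204, 73.8567)
# With firebase-admin, this dict would roughly correspond to a
# messaging.Message(...) sent via messaging.send(...).
```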

More Details: Smart Health Monitoring App


Face Detection And Recognition



The project is built in three parts, as follows:

  1. Face Dataset: detects a person's face and captures 30 images of it.

The code is as follows:

Capture multiple faces from multiple users, to be stored in a database (the dataset directory)
   ==> Faces will be stored in the directory dataset/ (if it does not exist, please create one)
   ==> Each face will have a unique numeric integer ID: 1, 2, 3, etc.

Developed by Ujwal, Yash, Adinath, Sanket under the guidance of Prianka ma'am, group no. 49


import cv2
import os

cam = cv2.VideoCapture(0)
cam.set(3, 640) # set video width
cam.set(4, 480) # set video height

face_detector = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

# For each person, enter one numeric face id
face_id = input('\n enter user id end press <return> ==>  ')

print("\n [INFO] Initializing face capture. Look at the camera and wait ...")
# Initialize individual sampling face count
count = 0


while True:

    ret, img =
    img = cv2.flip(img, 1) # mirror the video image horizontally
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, 1.3, 5)

    for (x,y,w,h) in faces:

        cv2.rectangle(img, (x,y), (x+w,y+h), (255,0,0), 2)
        count += 1

        # Save the captured face into the dataset folder
        cv2.imwrite("dataset/User." + str(face_id) + '.' + str(count) + ".jpg", gray[y:y+h,x:x+w])

        cv2.imshow('image', img)

    k = cv2.waitKey(100) & 0xff # Press 'ESC' to exit the video
    if k == 27:
    elif count >= 30: # Take 30 face samples and stop the video

# Do a bit of cleanup
print("\n [INFO] Exiting Program and cleanup stuff")
cv2.destroyAllWindows()

  2. Face Training: trains on the 30 previously captured images, which are converted to grayscale for easier detection. Code for noise reduction in the images is also applied.

The code is as follows:

Training multiple faces stored in a database:
   ==> Each face should have a unique numeric integer ID: 1, 2, 3, etc.
   ==> The computed LBPH model will be saved in the trainer/ directory (if it does not exist, please create one)
   ==> For PIL, install the Pillow library with "pip install pillow"

Developed by Ujwal, Yash, Adinath, Sanket under the guidance of Prianka ma'am, group no. 49


import cv2
import numpy as np
from PIL import Image
import os

# Path for face image database
path = 'dataset'

recognizer = cv2.face.LBPHFaceRecognizer_create()
detector = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

# function to get the images and label data
def getImagesAndLabels(path):

    imagePaths = [os.path.join(path,f) for f in os.listdir(path)]
    faceSamples = []
    ids = []

    for imagePath in imagePaths:

        PIL_img ='L') # convert it to grayscale
        img_numpy = np.array(PIL_img,'uint8')

        id = int(os.path.split(imagePath)[-1].split(".")[1])
        faces = detector.detectMultiScale(img_numpy)

        for (x,y,w,h) in faces:
            faceSamples.append(img_numpy[y:y+h,x:x+w])
            ids.append(id)

    return faceSamples,ids

print ("\n [INFO] Training faces. It will take a few seconds. Wait ...")
faces,ids = getImagesAndLabels(path)
recognizer.train(faces, np.array(ids))

# Save the model into trainer/trainer.yml
recognizer.write('trainer/trainer.yml') # worked on Mac, but not on Pi

# Print the number of faces trained and end program
print("\n [INFO] {0} faces trained. Exiting Program".format(len(np.unique(ids))))

  3. Face Recognition: recognizes a face if and only if that person's images are present in the dataset; otherwise it shows an unknown person. If the person is recognizable by the system, it shows the match as a percentage.

The code is as follows:

Real-time face recognition
   ==> Each face stored in the dataset/ directory should have a unique numeric integer ID: 1, 2, 3, etc.
   ==> The computed LBPH model (trained faces) should be in the trainer/ directory

Developed by Ujwal, Yash, Adinath, Sanket under the guidance of Prianka ma'am, group no. 49


import cv2
import numpy as np
import os 

recognizer = cv2.face.LBPHFaceRecognizer_create()'trainer/trainer.yml')
cascadePath = "haarcascade_frontalface_default.xml"
faceCascade = cv2.CascadeClassifier(cascadePath)
font = cv2.FONT_HERSHEY_SIMPLEX

# initiate id counter
id = 0

# names related to ids: example ==> Marcelo: id=1,  etc
names = ['None','Ujwal','Yash','Adinath','Sanket']

# Initialize and start realtime video capture
cam = cv2.VideoCapture(0)
cam.set(3, 640) # set video width
cam.set(4, 480) # set video height

# Define min window size to be recognized as a face
minW = 0.1*cam.get(3)
minH = 0.1*cam.get(4)

while True:

    ret, img =
    # img = cv2.flip(img, -1) # flip both axes if the camera is mounted upside down

    gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)

    faces = faceCascade.detectMultiScale(
        scaleFactor = 1.2,
        minNeighbors = 5,
        minSize = (int(minW), int(minH)),
    )

    for(x,y,w,h) in faces:

        cv2.rectangle(img, (x,y), (x+w,y+h), (0,255,0), 2)

        id, confidence = recognizer.predict(gray[y:y+h,x:x+w])

        # Check the confidence: 0 would be a perfect match,
        # values towards 100 mean a poor match
        if confidence < 100:
            id = names[id]
            confidence = "  {0}%".format(round(100 - confidence))
            id = "unknown"
            confidence = "  {0}%".format(round(100 - confidence))

        cv2.putText(img, str(id), (x+5,y-5), font, 1, (255,255,255), 2)
        cv2.putText(img, str(confidence), (x+5,y+h-5), font, 1, (255,255,0), 1)

    k = cv2.waitKey(10) & 0xff # Press 'ESC' to exit the video
    if k == 27:

# Do a bit of cleanup
print("\n [INFO] Exiting Program and cleanup stuff")
cv2.destroyAllWindows()

More Details: Face Detection And Recognition
