Voice Assistant (Python)

Last Updated on May 3, 2021

About

Voice Assistant helps you carry out tasks through spoken commands.
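The submission does not list its stack, so here is only a minimal sketch of how such an assistant can be wired up, assuming the speech_recognition and pyttsx3 packages (both packages and all names are assumptions, not the project's confirmed code):

import datetime

import pyttsx3
import speech_recognition as sr

engine = pyttsx3.init()      # offline text-to-speech engine
recognizer = sr.Recognizer()

def speak(text):
    engine.say(text)
    engine.runAndWait()

# Listen for one command from the default microphone
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)
    speak('How can I help you?')
    audio = recognizer.listen(source)

try:
    command = recognizer.recognize_google(audio).lower()
    if 'time' in command:
        speak(datetime.datetime.now().strftime('It is %H:%M'))
    else:
        speak('You said: ' + command)
except sr.UnknownValueError:
    speak('Sorry, I did not catch that.')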

More Details: Voice assistant (Python)


Face Detection and Recognition

Last Updated on May 3, 2021

About

The project is built in three parts, as follows:

  1. Face Dataset: Detects a person's face and captures 30 images of that person's face.

Code as follows:

'''
Capture multiple faces from multiple users to be stored in a database (dataset directory)
   ==> Faces will be stored in the directory dataset/ (created if it does not exist)
   ==> Each face will have a unique numeric integer ID such as 1, 2, 3, etc.

Developed by Ujwal, Yash, Adinath, Sanket under the guidance of Prianka ma'am, group no. 49
'''

import cv2
import os

# Create the dataset directory if it does not already exist
os.makedirs('dataset', exist_ok=True)

cam = cv2.VideoCapture(0)
cam.set(3, 640) # set video width
cam.set(4, 480) # set video height

face_detector = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

# For each person, enter one numeric face id
face_id = input('\n enter user id and press <return> ==>  ')

print("\n [INFO] Initializing face capture. Look the camera and wait ...")
# Initialize individual sampling face count
count = 0

while(True):

    ret, img = cam.read()
    img = cv2.flip(img, 1) # flip the image horizontally (mirror view)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, 1.3, 5)

    for (x,y,w,h) in faces:

        cv2.rectangle(img, (x,y), (x+w,y+h), (255,0,0), 2)     
        count += 1

        # Save the captured image into the datasets folder
        cv2.imwrite("dataset/User." + str(face_id) + '.' + str(count) + ".jpg", gray[y:y+h,x:x+w])

        cv2.imshow('image', img)

    k = cv2.waitKey(100) & 0xff # Press 'ESC' to exit the video
    if k == 27:
        break
    elif count >= 30: # Take 30 face samples, then stop the video
        break

# Do a bit of cleanup
print("\n [INFO] Exiting Program and cleanup stuff")
cam.release()
cv2.destroyAllWindows()


2. Face Training: Trains on the 30 previously captured images after converting them to grayscale. For easier detection, I have applied noise reduction to the images.

Code as follows:

'''
Training multiple faces stored in a database:
   ==> Each face should have a unique numeric integer ID such as 1, 2, 3, etc.
   ==> The computed LBPH model will be saved in the trainer/ directory (created if it does not exist)
   ==> PIL is required; install the Pillow library with "pip install pillow"

Developed by Ujwal, Yash, Adinath, Sanket under the guidance of Prianka ma'am, group no. 49
'''

import cv2
import numpy as np
from PIL import Image
import os

# Path for face image database
path = 'dataset'

recognizer = cv2.face.LBPHFaceRecognizer_create()
detector = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

# function to get the images and label data
def getImagesAndLabels(path):

    imagePaths = [os.path.join(path,f) for f in os.listdir(path)]     
    faceSamples=[]
    ids = []

    for imagePath in imagePaths:

        PIL_img = Image.open(imagePath).convert('L') # convert it to grayscale
        img_numpy = np.array(PIL_img,'uint8')

        id = int(os.path.split(imagePath)[-1].split(".")[1])
        faces = detector.detectMultiScale(img_numpy)

        for (x,y,w,h) in faces:
            faceSamples.append(img_numpy[y:y+h,x:x+w])
            ids.append(id)

    return faceSamples,ids

print ("\n [INFO] Training faces. It will take a few seconds. Wait ...")
faces,ids = getImagesAndLabels(path)
recognizer.train(faces, np.array(ids))

# Save the model into trainer/trainer.yml (creating the directory if needed)
os.makedirs('trainer', exist_ok=True)
recognizer.write('trainer/trainer.yml') # recognizer.save() worked on Mac, but not on Pi

# Print the number of faces trained and end the program
print("\n [INFO] {0} faces trained. Exiting Program".format(len(np.unique(ids))))


3. Face Recognition: Recognises a face only if that person's images are present in the dataset; otherwise it labels the person as unknown. If the system recognises the person, it shows the match as a percentage.

Code as follows:

'''
Real-time face recognition
   ==> Each face stored in the dataset/ directory should have a unique numeric integer ID such as 1, 2, 3, etc.
   ==> The computed LBPH model (trained faces) should be in the trainer/ directory

Developed by Ujwal, Yash, Adinath, Sanket under the guidance of Prianka ma'am, group no. 49
'''

import cv2
import numpy as np
import os 

recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read('trainer/trainer.yml')
cascadePath = "haarcascade_frontalface_default.xml"
faceCascade = cv2.CascadeClassifier(cascadePath)

font = cv2.FONT_HERSHEY_SIMPLEX

# initialize id counter
id = 0

# names related to ids: example ==> Marcelo: id=1,  etc
names = ['None','Ujwal','Yash','Adinath','Sanket']

# Initialize and start realtime video capture
cam = cv2.VideoCapture(0)
cam.set(3, 640) # set video width
cam.set(4, 480) # set video height

# Define min window size to be recognized as a face
minW = 0.1*cam.get(3)
minH = 0.1*cam.get(4)

while True:

    ret, img =cam.read()
    # img = cv2.flip(img, -1) # Flip vertically

    gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)

    faces = faceCascade.detectMultiScale( 
        gray,
        scaleFactor = 1.2,
        minNeighbors = 5,
        minSize = (int(minW), int(minH)),
       )

    for(x,y,w,h) in faces:

        cv2.rectangle(img, (x,y), (x+w,y+h), (0,255,0), 2)

        id, confidence = recognizer.predict(gray[y:y+h,x:x+w])

        # Check if confidence is less than 100 ==> "0" is a perfect match
        if (confidence < 100):
            id = names[id]
            confidence = "  {0}%".format(round(100 - confidence))
        else:
            # confidence >= 100 means a poor match, so label as unknown
            id = "unknown"
            confidence = "  {0}%".format(round(100 - confidence))
        
        cv2.putText(img, str(id), (x+5,y-5), font, 1, (255,255,255), 2)
        cv2.putText(img, str(confidence), (x+5,y+h-5), font, 1, (255,255,0), 1)  
    
    cv2.imshow('camera',img) 

    k = cv2.waitKey(10) & 0xff # Press 'ESC' to exit the video
    if k == 27:
        break

# Do a bit of cleanup
print("\n [INFO] Exiting Program and cleanup stuff")
cam.release()
cv2.destroyAllWindows()


More Details: Face Detection and Recognition


Credit Card Detection

Last Updated on May 3, 2021

About

In this project I build machine learning models to label anonymized credit card transactions as fraudulent or genuine, using a dataset from Kaggle. I also make several data visualizations to reveal patterns and structure in the data.

The dataset, hosted on Kaggle, includes credit card transactions made by cardholders. It contains 7,983 transactions, of which 17 (0.21%) are fraudulent. Each transaction has 30 features, all of which are numerical. The features V1, V2, ..., V28 are the result of a PCA transformation; to protect confidentiality, background information on these features is not available. The Time feature contains the time elapsed since the first transaction, and the Amount feature contains the transaction amount. The response variable, Class, is 1 in the case of fraud and 0 otherwise.

The approaches for the project are:

Randomly split the dataset into train, validation, and test sets.
Do feature engineering.
Train on the train set, then predict and evaluate on the validation set.
Try other models.
Compare the predictions and choose the best model.
Predict on the test set to report the final result.

Data Description

I was able to accurately identify fraudulent transactions using a LogisticRegression model. Features V1, V2, ..., V28 are the principal components obtained with PCA; the only features not transformed with PCA are 'Time' and 'Amount'. The 'Time' feature contains the seconds elapsed between each transaction and the first transaction in the dataset. The 'Class' feature is the target variable, with value 1 in case of fraud and 0 otherwise. To improve a particular model, I optimized hyperparameters via a grid search with 3-fold cross-validation.
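As an illustration, this workflow (random split, a LogisticRegression model, and a grid search with 3-fold cross-validation) could be sketched with scikit-learn roughly as below. The file name creditcard.csv and the parameter grid are assumptions, not the project's actual code.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV, train_test_split

# Load the Kaggle data (assumed to be saved locally as creditcard.csv)
df = pd.read_csv('creditcard.csv')
X, y = df.drop(columns='Class'), df['Class']

# Randomly split 60/20/20 into train, validation, and test sets,
# stratifying so each split keeps the 0.21% fraud rate
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.4, stratify=y, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.5, stratify=y_rest, random_state=42)

# Tune the regularization strength with a 3-fold grid search;
# F1 is a more useful score than accuracy on such an imbalanced target
grid = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={'C': [0.01, 0.1, 1, 10]},
    cv=3,
    scoring='f1',
)
grid.fit(X_train, y_train)

# Evaluate on the validation set, then report the final result on test
print(classification_report(y_val, grid.predict(X_val)))
print(classification_report(y_test, grid.predict(X_test)))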

More Details: CREDIT CARD DETECTION


Cert-It!

Last Updated on May 3, 2021

About

Cert It! is a web-based and Android-based app that aims to generate certificates over a range of templates that can be chosen by the user. The user can enter his or her details through a .csv or .xlsx file (containing data for multiple users in a predefined format), or list out their own requirements to generate a single certificate.


Problem Statement

There are numerous companies and organizations out there that provide certificates to their participants and winners. Sometimes even educational organizations have to produce a load of generated certificates for their people. This process gets pretty hectic since it's a very repetitive task.


Solution

Cert It! aims to solve this problem. We provide a well-rounded answer to this issue: an approach that automates these tasks while staying user friendly. Through this application we want to provide our users with:

  • Sample templates of our own, from which they can choose the best possible fit for their organization and participants.
  • The ability to upload their own template and generate certificates (a minimal generation sketch follows this list).
  • The ability to upload a snapshot of handwritten data in a specified format, from which our app will recognize the necessary details and map them onto a generated certificate.
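As a sketch of how this batch generation could work (the file names, the column name, and the font below are assumptions, not the app's actual implementation), names from a .csv can be drawn onto a template image with pandas and Pillow:

import os

import pandas as pd
from PIL import Image, ImageDraw, ImageFont

# Assumed inputs: participants.csv with a 'name' column, a template.png
# certificate template, and any TrueType font available locally
df = pd.read_csv('participants.csv')
font = ImageFont.truetype('DejaVuSans.ttf', 64)
os.makedirs('certificates', exist_ok=True)

for name in df['name']:
    cert = Image.open('template.png').copy()
    draw = ImageDraw.Draw(cert)
    # Centre the participant's name horizontally at a fixed height
    width = draw.textlength(name, font=font)
    draw.text(((cert.width - width) / 2, 400), name, font=font, fill='black')
    cert.save(f'certificates/{name}.png')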


More Details: Cert-It!


Voice Of The Day

Last Updated on May 3, 2021

About

Inspiration

The format used to work well on the radio, so we wanted to recreate those memories on Alexa.

What it does

Players can listen to up to three voice clips of well-known people and/or celebrities talking every day, as well as view a blurred image of the celebrity. After each clip, Alexa will ask who you think is talking, and you must try to answer correctly. This earns you a score for the monthly and all-time leader boards. The player can ask Alexa for hints, or to skip a voice clip and move on to the next one. Users’ scores are awarded depending on how many incorrect answers they gave for that voice and whether they used a hint. Users can also ask to hear yesterday’s answers, in case they couldn’t get them on that day.

How I built it

To create the structure of the skill, we used the Alexa Skills Kit CLI.

We used Amazon's S3 storage for all our in-game assets, such as the audio clips and images.

We used the Alexa Presentation Language to create the visual interface for the skill.

We used the Amazon GameOn SDK to create monthly and all-time leader boards for all users to enter without any sign up.

Every day, free users are given the ‘easy’ clip to answer. The set of clips each day is selected depending on the day of the year. Users who have purchased premium gain access to the ‘medium’ and ‘hard’ clips every day, as well as being able to ask for hints for the voices, skip a voice if they are stuck, and enter scores onto the leader boards.
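The submission does not show the selection logic, but the day-of-year scheme can be as simple as this sketch (the function and variable names are placeholders):

import datetime

def clip_set_for_today(clip_sets):
    # Cycle through the catalogue of clip sets by day of the year,
    # so every user hears the same set on a given day
    day_of_year = datetime.date.today().timetuple().tm_yday  # 1..366
    return clip_sets[day_of_year % len(clip_sets)]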

Accomplishments that I’m proud of

As well as creating a high-quality voice-only experience, we developed the skill to be very APL focused, which we are very proud of. The visual assets we used for the project were very high quality and we were able to dynamically display the appropriate data for each screen within the skill. The content loaded depends on who is talking, as well as the difficulty of the voice that the user is answering. APL also allowed us to blur and unblur the celebrity images, as opposed to having separate blurred and unblurred images for each person.

We were also very pleased with how we implemented the GameOn SDK into the skill. When the user submits a score, a random avatar is created for them, and their score is submitted under this avatar. This avoids any sign-up to use the leader boards, allowing all users to use them easily.

The GameOn SDK also allows us to create monthly and all-time competitions/leader boards, into which all users are automatically entered.

What I learned

I have learnt how to develop with APL, as well as better practices for structuring it more efficiently. For example, there are many APL views in the project, all of which are almost identical; what I have learnt is that in future projects it would be more effective to condense these into one primary view used for each screen, populated with the appropriate data.

I have also been able to hone the prompts to the user for upsells and for showing the leader boards. Testing has shown that constant prompts on every play can become tedious to the user, so we have reduced their frequency for a much better user experience.

More Details: Voice of the Day


Password Checker

Last Updated on May 3, 2021

About

This can be the most secure way for you to check whether your password has ever been compromised. This password checker checks whether a password has appeared in known breaches and, if it has, how many times it has been found. That makes it easy to judge whether your password is strong enough to keep or too weak. Its working is pretty simple: in my terminal I run my Python file, checkmypass.py, followed by the passwords to check, and it checks as many passwords as are listed on the command line. I have used the Pwned Passwords API together with the SHA-1 hashing algorithm to turn the given password into a complex digest that is hard to reverse; only the first five characters of the hashed password are ever sent, for extra privacy, so the real one stays safe. This relies on the concept of k-anonymity, which provides privacy protection by guaranteeing that each record relates to at least k individuals, even if the released records are directly linked (or matched) to external information. I have added this to my GitHub repository.

password-checker/checkmypass.py at main · THC1111/password-checker (github.com)
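The core of the k-anonymity range check (a sketch of the approach described above, not necessarily the exact repository code) looks like this:

import hashlib
import sys

import requests

def pwned_count(password):
    # SHA-1 hash the password; only the first 5 hex characters are sent
    sha1 = hashlib.sha1(password.encode('utf-8')).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    res = requests.get('https://api.pwnedpasswords.com/range/' + prefix)
    res.raise_for_status()
    # The API returns 'SUFFIX:COUNT' lines for every hash with that prefix,
    # so the full password hash never leaves the machine
    for line in res.text.splitlines():
        hash_suffix, count = line.split(':')
        if hash_suffix == suffix:
            return int(count)
    return 0

if __name__ == '__main__':
    for password in sys.argv[1:]:
        count = pwned_count(password)
        if count:
            print(f'{password} was found {count} times; consider changing it.')
        else:
            print(f'{password} was not found. Carry on!')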

This can be really effective for personal use.

More Details: PASSWORD CHECKER
