Real Estate Price Prediction

Last Updated on May 3, 2021

About

People looking to buy a new home tend to be conservative with their budgets and market strategies. Existing systems calculate house prices without the necessary prediction of future market trends and price increases. The goal of this project is to predict efficient house prices for real estate customers with respect to their budgets and priorities. Future prices are predicted by analyzing previous market trends and price ranges, as well as upcoming developments. The project is built around a website that accepts a customer's specifications and applies a multiple linear regression algorithm from data mining. This application helps customers invest in an estate without approaching an agent, and it also decreases the risk involved in the transaction.


Housing prices are an important reflection of the economy, and housing price ranges are of great interest to both buyers and sellers. In this project, house prices are predicted from explanatory variables that cover many aspects of residential houses. The project uses a random forest algorithm to predict prices by analyzing current house prices, thereby forecasting future prices according to the user's requirements. The goal is to create a regression model that can accurately estimate the price of a house given its features.
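
The description mentions fitting a regression model to explanatory variables. As a rough, dependency-free sketch of that idea, the snippet below fits a multiple linear regression via the normal equations; the feature names, areas, and prices are made up for illustration and are not from the project's dataset.

```python
# Minimal multiple linear regression via the normal equations,
# beta = (X^T X)^(-1) X^T y, on hypothetical housing data.

def fit_linear_regression(X, y):
    """Solve (X^T X) beta = X^T y with Gaussian elimination."""
    n = len(X[0])
    # Build the normal-equation system A beta = b
    A = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(n)]
         for i in range(n)]
    b = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(n)]
    # Forward elimination with partial pivoting
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        b[col], b[pivot] = b[pivot], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back substitution
    beta = [0.0] * n
    for i in reversed(range(n)):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, n))) / A[i][i]
    return beta

# Rows: [1 (intercept), area in sq ft, bedrooms]; prices are illustrative.
X = [[1, 1000, 2], [1, 1500, 3], [1, 2000, 3], [1, 2500, 4]]
y = [200000, 290000, 370000, 460000]
beta = fit_linear_regression(X, y)
predicted = sum(b * f for b, f in zip(beta, [1, 1800, 3]))
print(round(predicted))  # → 338000 (the toy data is exactly linear)
```

In practice a library such as scikit-learn would be used for both the linear and random forest variants; this sketch only shows what "fit a regression model to the features" means concretely.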

 



More Details: Real Estate Price Prediction




Task-Manager Backend REST-API (Node.js)

Last Updated on May 3, 2021

About

Technology Used:

  • Node.js
  • Express.js
  • MongoDB


Library Used:

  • jwt (JSON Web Token)
  • bcrypt
  • validator
  • sharp
  • multer


General Description:


  • In this project, users can create their own tasks.
  • Users can manage their tasks according to their preferences.
  • Users can edit or delete a particular task and also track its status (i.e., completed or pending).


Usage:


  • To use the application, you must first register by calling the Sign Up API.
  • Passwords are stored in the database as bcrypt hashes, never in plain text.
  • The Login API generates an access token using jwt.
  • To call the create, update, and delete APIs, the access token must be passed in the header section of the request.
  • If no access token is passed, the user gets the message 'Please Authenticate'.
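
The flow above (login issues a token, protected routes verify it) can be sketched with a minimal HS256 JWT built from the standard library. The real project uses the jwt package in Node.js; the secret and payload field below are placeholders, and the error string mirrors the API's 'Please Authenticate' message.

```python
# Minimal HS256 JSON Web Token sign/verify sketch (stdlib only).
# SECRET is a hypothetical signing key, not from the project.
import base64
import hashlib
import hmac
import json

SECRET = b"change-me"

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign(payload: dict) -> str:
    """Create header.payload.signature, signed with HMAC-SHA256."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify(token: str) -> dict:
    """Recompute the signature; reject the token if it does not match."""
    header, body, sig = token.split(".")
    signing_input = f"{header}.{body}".encode()
    expected = b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("Please Authenticate")  # same message as the API
    padded = body + "=" * (-len(body) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

token = sign({"_id": "user123"})
print(verify(token)["_id"])  # → user123
```

A production setup would also put an expiry claim in the payload and keep the secret in an environment variable rather than in code.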


Database Structure:


Task:

  • description: String
  • completed: Boolean
  • owner: ObjectId
  • timestamps: true


User:

  • name: String
  • email: String
  • password: String
  • age: Number
  • tokens: [{ token: String }]
  • avtar: Buffer


APIs:


User:

URL | TYPE | Description

  • /users/login | POST | login
  • /users/ | POST | sign up
  • /users/me | GET | profile
  • /users/logout | POST | logout
  • /users/logoutall | POST | logout from all devices
  • /users/me | DELETE | delete user
  • /users/me | PUT | update user
  • /upload | POST | upload avatar
  • /users/me/avtar | DELETE | delete user avatar



Task:

URL | TYPE | Description

  • /task | POST | create task
  • /task | GET | get tasks
  • /task/:id | PUT | update task
  • /task/:id | DELETE | delete task


More Details: Task-Manager Backend REST-API(Node.js)



Smart Bag Tracker

Last Updated on May 3, 2021

About

Smart bag is an application-specific design that can be useful for almost everyone in society. The loss or mishandling of luggage in airports is increasing nowadays, tremendously raising the associated costs. Constant monitoring is expected to detect possible errors in a timely manner, allowing a proactive attitude when correcting this kind of situation. There are several devices on the market, but all have some problems, such as power consumption, location, and portability. This project provides a novel idea for tracking luggage in real time with the help of a microcontroller system that is wearable and handy. The proposed system has been designed using wireless communication techniques.


The system consists of a GPS module, which fetches the current latitude and longitude, and an advanced Wi-Fi-enabled microcontroller, which connects to a 4G hotspot and transmits the bag's current location to the central server. Using an Android app, the user can view the current position of the bag in Google Maps.
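
As a rough sketch of the transmission step, the snippet below builds the kind of JSON payload the microcontroller could POST to the central server for each GPS fix. The field names, bag ID, and coordinates are assumptions for illustration, not taken from the project.

```python
# Hypothetical location payload for one GPS fix (stdlib only).
import json
import time

def build_location_payload(bag_id, latitude, longitude):
    """Package one GPS fix for an HTTP POST to the tracking server."""
    return json.dumps({
        "bag_id": bag_id,          # identifies the bag on the server
        "lat": latitude,           # decimal degrees from the GPS module
        "lon": longitude,
        "timestamp": int(time.time()),  # when the fix was taken
    })

payload = build_location_payload("BAG-001", 17.3850, 78.4867)
print(json.loads(payload)["bag_id"])  # → BAG-001
```

The Android app would then read the latest payload per bag from the server and plot the lat/lon pair on Google Maps.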


There are many luggage-tracking applications, but they are not controlled from the luggage itself; instead, commands are sent from the mobile phone to the luggage via machine-to-machine communication. The mobile phone has a pre-installed application with a pre-installed set of instructions and waits for the user to send commands, for example to track the bag's location.




More Details: Smart Bag Tracker



Identifying Water Sources For Smallholder Farmers With Agri

Last Updated on May 3, 2021

About

CIAT and The Zamorano Pan-American Agricultural School, in coordination with the United States Agency for International Development (USAID)/Honduras, began in March the validation and dissemination process of the geographic information system (GIS) tool AGRI (Water for Irrigation, by its Spanish acronym).

What is AGRI?

AGRI was developed in ArcGIS 10.1® for western Honduras with the aim of providing support for decision making in identifying suitable water sources for small drip irrigation systems. These systems cover areas of up to 10 hectares and are part of the U.S. government initiative Feed the Future in six departments of western Honduras (Santa Bárbara, Copán, Ocotepeque, Lempira, Intibucá, and La Paz).

AGRI identifies surface-water sources and sites suitable for rainwater harvesting for agriculture. In addition, AGRI maps the best routes for installing water pipes between the first parcel of the irrigation system and the identified water source. The tool is complemented by deforestation analyses of upstream areas, as an indicator of watershed conservation status.

How was AGRI developed?

Developing this tool required the implementation of a complex framework of spatial analysis that included correcting the terrain Digital Elevation Model (DEM), using weather information derived from remote sensors, hydrological analysis such as estimation of runoff and water balance, and modeling the path with lower costs or fewer difficulties in installing pipes across the landscape. Additionally, it was necessary to do digital soil mapping for some variables.
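
The "path with lower costs" for installing pipes can be modeled as a least-cost path over a raster of per-cell installation costs. Below is a small Dijkstra sketch on a hypothetical 3x3 cost grid; it illustrates the idea only, not AGRI's actual ArcGIS 10.1 workflow, and the terrain costs are invented.

```python
# Least-cost path over a cost raster, as used for pipe-route modeling.
import heapq

def least_cost_path(grid, start, goal):
    """Return the minimum total cell cost from start to goal (4-connected)."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: grid[start[0]][start[1]]}  # cost includes the start cell
    queue = [(dist[start], start)]
    while queue:
        cost, (r, c) = heapq.heappop(queue)
        if (r, c) == goal:
            return cost
        if cost > dist.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                ncost = cost + grid[nr][nc]
                if ncost < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = ncost
                    heapq.heappush(queue, (ncost, (nr, nc)))
    return None  # goal unreachable

# Hypothetical cost raster: higher values mean harder terrain for pipes.
terrain = [
    [1, 1, 9],
    [9, 1, 9],
    [9, 1, 1],
]
print(least_cost_path(terrain, (0, 0), (2, 2)))  # → 5
```

In a GIS tool the cost surface would be derived from the corrected DEM, slope, and land cover; the search itself works the same way.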

What does AGRI offer to its users?

AGRI was developed based on the following needs identified by USAID-Honduras in relation to the implementation of small irrigation systems in the country:

  1. To find the closest water source that permits transportation of the water by gravity to parcels.
  2. To search for “permanent and sufficient” water sources to establish water outlets.
  3. To find suitable sites for building reservoirs for the harvest of runoff water.
  4. To take into account the protection of water sources for human consumption and other protected zones and avoid possible conflicts on water use.
  5. The tool needs to be easy to use for technicians and agronomists.
  6. The tool should use information that is readily available in the country.

This application was developed at the request of USAID-Honduras and it responds to the implementation needs of its programs. This implementation was led by the Decision and Policy Analysis (DAPA) area of CIAT with the participation of the soil area, which contributed with the digital soil mapping for the project. Likewise, Zamorano University supported the field validation and the analysis of the legal context related to water use, which serves as a basis for the application of this tool.

More Details: Identifying water sources for smallholder farmers with AGRI



Worklet Allocation System

Last Updated on May 3, 2021

About

Innovation and Technology have granted numerous opportunities for people around the world who are in need of employment. It has created new marketplaces that offer stable economic benefits which were never thought of before. However, in this modern society, with a plethora of media & mass communication approaches, people offering domestic services still struggle to find jobs on their own and most of them end up joining agencies which take away a significant portion of their income. Services such as home repairs, beauty, and cleaning can be provided at much cheaper rates if the workers are approached directly without any inefficient middlemen.

A web-based home services marketplace is a more convenient and efficient way for people to locate, hire, and provide feedback about nearby domestic employees who are willing to provide their services as per the customer’s requirement. Our proposed system aims to hire skilled workers and connect them to the right clients based on locational proximity. India has a huge demand for these kinds of services and a platform such as this can be used to cater to them.


The aim of this project is to provide a worklet-servicing application capable of managing workers employed in a variety of fields as well as the clientele who enlist their services on a day-to-day basis. Our algorithm aims to match each client to the best service professionals for their need who are closest to their location, in a shorter time period, using effective allocation algorithms such as Shortest Job First and the Banker's Algorithm.
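
The Shortest Job First part of the allocation can be sketched in a few lines: jobs with the shortest estimated duration are served first, which minimises average waiting time. The service names and hour estimates below are illustrative, and the proximity and Banker's Algorithm components are not shown.

```python
# Plain Shortest Job First ordering over hypothetical service requests.

def shortest_job_first(jobs):
    """jobs: list of (name, estimated_hours).
    Returns the serving order and each job's waiting time under SJF."""
    order = sorted(jobs, key=lambda job: job[1])  # shortest estimate first
    waits, elapsed = {}, 0
    for name, hours in order:
        waits[name] = elapsed   # time this job spends waiting
        elapsed += hours
    return [name for name, _ in order], waits

requests = [("plumbing", 4), ("cleaning", 1), ("repair", 2)]
order, waits = shortest_job_first(requests)
print(order)              # → ['cleaning', 'repair', 'plumbing']
print(waits["plumbing"])  # → 3
```

The questionnaire described below is what would supply the `estimated_hours` input in the real system.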


The project is built with Node.js and MongoDB. At present, the allocation algorithm runs in Python; calling it as an API is a planned future enhancement.


The application has three interfaces: Admin, Customer, and Client.

A client is someone who needs the services on a daily basis; a customer is someone who needs the services for some period of time in a day.

The admin has access to the workers' data and live tracking of the locations where they are working.

To decide how many hours of service are required, we made a questionnaire through which a rough estimate of the time can be made in order to allocate the workers.


Future enhancements of the project:

  • We intend to add a feature where a worker can mark their attendance for the day from within the mobile application, and possibly a chat feature to let them communicate with the consumer.
More Details: Worklet Allocation System



Face Detection and Recognition

Last Updated on May 3, 2021

About

The project is built in three parts, as follows:



  1. Face Dataset: Detects a person's face and captures 30 images of that person's face.

The code is as follows:

'''
Capture multiple faces from multiple users to be stored in a database (dataset directory)
   ==> Faces will be stored in the directory: dataset/ (if it does not exist, please create it)
   ==> Each face will have a unique numeric integer ID such as 1, 2, 3, etc.

Developed by Ujwal, Yash, Adinath, Sanket under the guidance of Prianka ma'am, group no. 49
'''

import cv2
import os

cam = cv2.VideoCapture(0)
cam.set(3, 640) # set video width
cam.set(4, 480) # set video height

face_detector = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

# For each person, enter one numeric face id
face_id = input('\n enter user id and press <return> ==>  ')

print("\n [INFO] Initializing face capture. Look the camera and wait ...")
# Initialize individual sampling face count
count = 0

while(True):

    ret, img = cam.read()
    img = cv2.flip(img, 1) # flip the image horizontally (mirror view)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, 1.3, 5)

    for (x,y,w,h) in faces:

        cv2.rectangle(img, (x,y), (x+w,y+h), (255,0,0), 2)     
        count += 1

        # Save the captured image into the datasets folder
        cv2.imwrite("dataset/User." + str(face_id) + '.' + str(count) + ".jpg", gray[y:y+h,x:x+w])

        cv2.imshow('image', img)

    k = cv2.waitKey(100) & 0xff # Press 'ESC' for exiting video
    if k == 27:
        break
    elif count >= 30: # Take 30 face samples, then stop the video
         break

# Do a bit of cleanup
print("\n [INFO] Exiting Program and cleanup stuff")
cam.release()
cv2.destroyAllWindows()


2. Face Training: Trains on the 30 previously captured images, converting them to grayscale. To make detection easier, the code also reduces noise in the images.

The code is as follows:

'''
Training multiple faces stored in a database:
   ==> Each face should have a unique numeric integer ID such as 1, 2, 3, etc.
   ==> The computed LBPH model will be saved in the trainer/ directory (if it does not exist, please create it)
   ==> For using PIL, install the Pillow library with "pip install pillow"

Developed by Ujwal, Yash, Adinath, Sanket under the guidance of Prianka ma'am, group no. 49
'''

import cv2
import numpy as np
from PIL import Image
import os

# Path for face image database
path = 'dataset'

recognizer = cv2.face.LBPHFaceRecognizer_create()
detector = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")

# function to get the images and label data
def getImagesAndLabels(path):

    imagePaths = [os.path.join(path,f) for f in os.listdir(path)]     
    faceSamples=[]
    ids = []

    for imagePath in imagePaths:

        PIL_img = Image.open(imagePath).convert('L') # convert it to grayscale
        img_numpy = np.array(PIL_img,'uint8')

        id = int(os.path.split(imagePath)[-1].split(".")[1])
        faces = detector.detectMultiScale(img_numpy)

        for (x,y,w,h) in faces:
            faceSamples.append(img_numpy[y:y+h,x:x+w])
            ids.append(id)

    return faceSamples,ids

print ("\n [INFO] Training faces. It will take a few seconds. Wait ...")
faces,ids = getImagesAndLabels(path)
recognizer.train(faces, np.array(ids))

# Save the model into trainer/trainer.yml
recognizer.write('trainer/trainer.yml') # recognizer.save() worked on Mac, but not on Pi

# Print the number of faces trained and end the program
print("\n [INFO] {0} faces trained. Exiting Program".format(len(np.unique(ids))))


3. Face Recognition: Recognises a face if and only if that person's images are present in the dataset; otherwise it labels the person as unknown. If the person is recognised by the system, it shows the match as a percentage.

The code is as follows:

'''
Real-time face recognition
   ==> Each face stored in the dataset/ dir should have a unique numeric integer ID such as 1, 2, 3, etc.
   ==> The computed LBPH model (trained faces) should be in the trainer/ dir

Developed by Ujwal, Yash, Adinath, Sanket under the guidance of Prianka ma'am, group no. 49
'''

import cv2
import numpy as np
import os 

recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read('trainer/trainer.yml')
cascadePath = "haarcascade_frontalface_default.xml"
faceCascade = cv2.CascadeClassifier(cascadePath)

font = cv2.FONT_HERSHEY_SIMPLEX

# initialize the id counter
id = 0

# names related to ids: example ==> Marcelo: id=1,  etc
names = ['None','Ujwal','Yash','Adinath','Sanket']

# Initialize and start realtime video capture
cam = cv2.VideoCapture(0)
cam.set(3, 640) # set video width
cam.set(4, 480) # set video height

# Define min window size to be recognized as a face
minW = 0.1*cam.get(3)
minH = 0.1*cam.get(4)

while True:

    ret, img =cam.read()
    # img = cv2.flip(img, -1) # Flip vertically

    gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)

    faces = faceCascade.detectMultiScale( 
        gray,
        scaleFactor = 1.2,
        minNeighbors = 5,
        minSize = (int(minW), int(minH)),
       )

    for(x,y,w,h) in faces:

        cv2.rectangle(img, (x,y), (x+w,y+h), (0,255,0), 2)

        id, confidence = recognizer.predict(gray[y:y+h,x:x+w])

        # Check if confidence is less than 100 ==> "0" is a perfect match
        if (confidence < 100):
            id = names[id]
            confidence = "  {0}%".format(round(100 - confidence))
        else:
            id = "unknown"
            confidence = "  {0}%".format(round(100 - confidence))
        
        cv2.putText(img, str(id), (x+5,y-5), font, 1, (255,255,255), 2)
        cv2.putText(img, str(confidence), (x+5,y+h-5), font, 1, (255,255,0), 1)  
    
    cv2.imshow('camera',img) 

    k = cv2.waitKey(10) & 0xff # Press 'ESC' for exiting video
    if k == 27:
        break

# Do a bit of cleanup
print("\n [INFO] Exiting Program and cleanup stuff")
cam.release()
cv2.destroyAllWindows()


More Details: Face Detection and Recognition
