Sample API


About

This API was built as an assignment for an internship opportunity. It can be used by an admin to add advisors.

Users can register themselves and log in to book advisors by specifying dates. They can also view both the available advisors and the booked ones. As this is an API, there is no front end; I am attaching the Postman collection for this API.

More Details: Sample API


Task-Manager Backend REST API (Node.js)


About

Technologies Used:

  • Node.js
  • Express.js
  • MongoDB


Libraries Used:

  • jwt (JSON Web Token)
  • bcrypt
  • validator
  • sharp
  • multer


General Description:


  • In this project, users can create their own tasks.
  • Users can manage their tasks according to their preferences.
  • Users can edit or delete a particular task, and can also track its status (i.e., completed or pending).


Usage:


  • To use the application, you must first register; you can do this by calling the Sign Up API.
  • The password is stored in hashed (bcrypt) form in the database.
  • The Login API generates an access token using jwt.
  • To call the create, update, and delete APIs, the access token must be passed in the header section of the request.
  • If no access token is passed, the user gets the message 'Please Authenticate' (see the client sketch below).
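
For illustration, a minimal Python client sketch of this flow. The base URL/port and the "token" field in the login response are assumptions; the endpoints are the ones listed under APIs below:

```python
import requests

BASE_URL = "http://localhost:3000"  # assumed host/port

# Register, then log in to obtain a JWT access token.
requests.post(f"{BASE_URL}/users/", json={
    "name": "Jane", "email": "jane@example.com",
    "password": "s3cret!", "age": 30,
})
login = requests.post(f"{BASE_URL}/users/login",
                      json={"email": "jane@example.com", "password": "s3cret!"})
token = login.json()["token"]  # assumed response shape

# Authenticated call: pass the access token in the request header.
headers = {"Authorization": f"Bearer {token}"}
resp = requests.post(f"{BASE_URL}/task",
                     json={"description": "Write report"},
                     headers=headers)
print(resp.status_code, resp.json())
```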


Database Structure:


Task:

  description : String,
  completed : Boolean,
  owner : ObjectId,
  timestamps : true


User:

  name : String,
  email : String,
  password : String,
  age : Number,
  tokens : [{ token : String }],
  avtar : Buffer

APIs:


User:

URL               TYPE     Description
/users/login      POST     Log in
/users/           POST     Sign up
/users/me         GET      Get profile
/users/logout     POST     Log out
/users/logoutall  POST     Log out from all devices
/users/me         DELETE   Delete user
/users/me         PUT      Update user
/upload           POST     Upload avatar
/users/me/avtar   DELETE   Delete user avatar



Task:

URL         TYPE     Description
/task       POST     Create task
/task       GET      Get tasks
/task/:id   PUT      Update task
/task/:id   DELETE   Delete task


More Details: Task-Manager Backend REST-API(Node.js)



Book Recommendation System


About


A book recommendation system is built and deployed in this work. Recommendations are driven by user feedback and ratings: the system analyses users' ratings, comments, and reviews online, classifying comments as positive or negative using opinion mining. Books matching the user's interests are displayed in a top list, and the user can also read the feedback other people have given about a book or any searched item. When searching a large catalogue, a user is easily overwhelmed by the number of results and unsure which one to choose; in that case the recommender helps by surfacing the items of interest. This makes for a trustworthy approach, with selection grounded in the dataset.

Clustering

Clustering is the central idea of this project. Clustering groups elements by similarity: similar elements are kept in a single group, while dissimilar elements reside in another group, based on a similarity value or a maximum cluster size. The approach used in this work is K-means clustering, applied to group similar users. K-means is a simple unsupervised learning algorithm that simplifies the mining work by grouping similar elements into clusters around K centroids: the distance from each element to each centroid is computed, and the element joins the cluster of its nearest centroid.

In this project, 6 clusters were made.

The project is made with 2 separate datasets in .csv format taken from Kaggle (loaded in the sketch after this list):

  1. Books dataset
  2. Ratings
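
For illustration, a minimal sketch of the user-clustering step. The file name ratings.csv and the columns user_id, book_id, rating are assumptions; the actual Kaggle files may differ:

```python
import pandas as pd
from sklearn.cluster import KMeans

# Hypothetical file and column names; the actual Kaggle datasets may differ.
ratings = pd.read_csv("ratings.csv")  # columns: user_id, book_id, rating

# Build a user x book rating matrix (unrated books filled with 0).
matrix = ratings.pivot_table(index="user_id", columns="book_id",
                             values="rating", fill_value=0)

# Group similar users into 6 clusters, as in the project.
kmeans = KMeans(n_clusters=6, random_state=42, n_init=10)
labels = kmeans.fit_predict(matrix)

# Within a user's cluster, highly rated books they haven't read
# become candidate recommendations.
matrix["cluster"] = labels
```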

This project is GUI based. The output page has 2 options:

  1. Rate books
  2. Recommend books

The user can choose either option.

Rate books

In this option, the user can rate books.

Recommend books

In this option, books are recommended to the user based on their previous reading.

More Details: Book Recommendation System



Project - Mercedes-Benz Greener Manufacturing


About

DESCRIPTION

Reduce the time a Mercedes-Benz spends on the test bench.

Problem Statement Scenario:

Since the first automobile, the Benz Patent Motor Car in 1886, Mercedes-Benz has stood for important automotive innovations. These include the passenger safety cell with the crumple zone, the airbag, and intelligent assistance systems. Mercedes-Benz applies for nearly 2000 patents per year, making the brand the European leader among premium carmakers. Mercedes-Benz cars are leaders in the premium car industry. With a huge selection of features and options, customers can choose the customized Mercedes-Benz of their dreams.

To ensure the safety and reliability of every unique car configuration before they hit the road, Daimler’s engineers have developed a robust testing system. As one of the world’s biggest manufacturers of premium cars, safety and efficiency are paramount on Daimler’s production lines. However, optimizing the speed of their testing system for many possible feature combinations is complex and time-consuming without a powerful algorithmic approach.

You are required to reduce the time that cars spend on the test bench. You will work with a dataset representing different permutations of features in a Mercedes-Benz car to predict the time it takes to pass testing. Optimal algorithms will contribute to faster testing, resulting in lower carbon dioxide emissions without reducing Daimler’s standards.


I performed data exploration, checked for missing values and outliers, and treated the outliers. I applied label encoding to the categorical variables and scaled the data. I also applied PCA to reduce the dimensionality of the data, but it had no effect on the result. For prediction I used Random Forest, KNN, and XGBoost models; among them, XGBoost gave the best result.
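
For illustration, a minimal sketch of the pipeline described above, assuming the Kaggle train.csv with an ID column and target column y; the hyperparameters are illustrative placeholders, not the ones actually used:

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder, StandardScaler
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

# Assumes the Kaggle train.csv with the test-bench time in column "y".
df = pd.read_csv("train.csv")
X, y = df.drop(columns=["y", "ID"]), df["y"]

# Label-encode the categorical columns, then scale everything.
for col in X.select_dtypes(include="object").columns:
    X[col] = LabelEncoder().fit_transform(X[col])
X = StandardScaler().fit_transform(X)

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2,
                                            random_state=42)

# XGBoost gave the best result among the models tried.
model = XGBRegressor(n_estimators=300, learning_rate=0.05)
model.fit(X_tr, y_tr)
print("R^2 on validation split:", model.score(X_val, y_val))
```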


More Details: Project - Mercedes-Benz Greener Manufacturing



Resume Up-Loader


About

Description:

Have you ever applied to an organisation with a CV through email? It can happen that the organisation doesn't know what the candidate actually wants, such as a job preference or type of job. This gets easier with this app, called Resume Up-loader.

Working model:

This is my first self-directed project using Django (a Python framework), called Resume Up-loader.

You enter every detail about yourself: job preference, job location, photos, signature, and CV. After submitting, the information is loaded onto the server, and on the next page you can view all your information and download the resume. I am continuously working on it and upgrading it so that it lists all the companies matching the preferred job location for your current qualifications and skills; this helps candidates know which companies they are suitable for, and it also helps companies know their candidates better.
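
For illustration, a minimal sketch of what the underlying Django model might look like; the model and field names here are hypothetical, not the project's actual code:

```python
from django.db import models

# Illustrative model; the project's actual fields and names may differ.
class Resume(models.Model):
    name = models.CharField(max_length=100)
    job_preference = models.CharField(max_length=100)
    job_location = models.CharField(max_length=100)
    photo = models.ImageField(upload_to="photos/")
    signature = models.ImageField(upload_to="signatures/")
    cv = models.FileField(upload_to="cvs/")

    def __str__(self):
        return self.name
```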



To make this single-page website, I have used the Python web framework called Django.

Django is a high-level Python Web framework that encourages rapid development and clean, pragmatic design. Built by experienced developers, it takes care of much of the hassle of Web development, so you can focus on writing your app without needing to reinvent the wheel. It’s free and open source.

I have also used HTML to define the structure of the front end, and used style tags to make it beautiful.

More Details: Resume up-loader



Wafer Sensors Faulty Detection


About

Project Description:

Detecting faulty sensors in wafers using K-means, Random Forest, and Decision Tree algorithms.


Problem Statement

To build a classification methodology to predict the quality of wafer sensors based on the given training data. 

Architecture

1. Data Description
2. Data Validation
3. Data Insertion in Database
4. Model Training
5. Prediction Data Description
6. Data Validation
7. Data Insertion in Database
8. Prediction
9. Cloud Deployment

Data Description

The client will send data in multiple sets of files in batches at a given location. Data will contain Wafer names and 590 columns of different sensor values for each wafer. The last column will have the "Good/Bad" value for each wafer.

"Good/Bad" column will have two unique values +1 and -1. 

"+1" represents Bad wafer.

"-1" represents Good Wafer.

Apart from training files, we also require a "schema" file from the client, which contains all the relevant information about the training files such as:

Name of the files, Length of Date value in File Name, Length of Time value in File Name, Number of Columns, Name of the Columns, and their datatype.
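
For illustration, a hypothetical schema file rendered as a Python dict; the actual key names and values come from the client's schema file:

```python
# Hypothetical training schema; actual key names and values come from the client.
training_schema = {
    "SampleFileName": "wafer_08012020_120000.csv",
    "LengthOfDateStampInFile": 8,
    "LengthOfTimeStampInFile": 6,
    "NumberofColumns": 592,  # wafer name + 590 sensors + "Good/Bad" label
    "ColName": {
        "Wafer": "varchar",
        "Sensor-1": "float",
        # ... one entry per sensor column ...
        "Good/Bad": "Integer",
    },
}
```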

Data Validation 

In this step, we perform different sets of validation on the given set of training files. 

a. Name Validation

b. Number of columns

c. Name of columns

d. Data type of columns

e. Null values in columns

If all the checks pass as per the schema file, we move the file to "Good_Data_Folder"; otherwise we move it to "Bad_Data_Folder" (see the sketch below).
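
For illustration, a minimal sketch of one such check (column-count validation), assuming the hypothetical schema dict shown earlier:

```python
import shutil
import pandas as pd

def validate_column_count(csv_path, schema,
                          good_dir="Good_Data_Folder",
                          bad_dir="Bad_Data_Folder"):
    """Move a file to Good_Data_Folder only if its column count matches the schema."""
    df = pd.read_csv(csv_path)
    ok = df.shape[1] == schema["NumberofColumns"]
    shutil.move(csv_path, good_dir if ok else bad_dir)
    return ok
```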

Data Insertion in Database

1) Database Creation and connection -- Create a database with the given name.

2) Table creation in the database -- A table named "Good_Data" is created in the database for inserting the files from the "Good_Data_Folder", based on the column names and datatypes given in the schema file.

3) Insertion of files in the table -- All the files in the "Good_Data_Folder" are inserted into the above-created table. If any file has an invalid data type in any of its columns, the file is not loaded into the table and is moved to "Bad_Data_Folder".
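
For illustration, a minimal sketch of the insertion step using SQLite; the project's actual database engine and table layout may differ:

```python
import sqlite3
from pathlib import Path
import pandas as pd

# Assumes SQLite; the project's actual database engine may differ.
conn = sqlite3.connect("training.db")
for csv_file in Path("Good_Data_Folder").glob("*.csv"):
    df = pd.read_csv(csv_file)
    # Append each validated file into the Good_Data table.
    df.to_sql("Good_Data", conn, if_exists="append", index=False)
conn.close()
```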

Model Training

1) Data Export from Db - The data stored in the database is exported as a CSV file to be used for model training.

2) Data Preprocessing  

  a) Check for null values in the columns. If present, impute the null values using the KNN imputer.

  b) Check if any column has zero standard deviation, remove such columns as they don't give any information during model training.

3) Clustering --- The KMeans algorithm is used to create clusters in the preprocessed data. The optimum number of clusters is selected by plotting the elbow plot, and for the dynamic selection of the number of clusters we use the "KneeLocator" function. The idea behind clustering is to train a separate model for the data in each cluster. The KMeans model is trained over the preprocessed data, and the model is saved for further use in prediction (see the sketch after this list).

4) Model Selection --- After the clusters are created, we find the best model for each cluster. We use two algorithms, "Random Forest" and "XGBoost". For each cluster, both algorithms are trained with the best parameters derived from GridSearch. We calculate the AUC score for both models and select the model with the better score. In this way a model is selected for each cluster, and all the cluster models are saved for use in prediction.
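
For illustration, a minimal sketch of the elbow-based cluster selection with KneeLocator and the per-cluster model choice by AUC; the parameter grids are placeholders, and labels are assumed to be mapped from {-1, +1} to {0, 1} for XGBoost:

```python
from kneed import KneeLocator
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split
from xgboost import XGBClassifier

def pick_n_clusters(X, k_max=10):
    """Elbow method: fit KMeans for each k and let KneeLocator find the knee."""
    inertias = [KMeans(n_clusters=k, n_init=10).fit(X).inertia_
                for k in range(1, k_max + 1)]
    knee = KneeLocator(range(1, k_max + 1), inertias,
                       curve="convex", direction="decreasing")
    return knee.knee

def best_model_for_cluster(X, y):
    """GridSearch RF and XGBoost, then keep whichever scores the higher AUC.

    Assumes y is already mapped from {-1, +1} to {0, 1}.
    """
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=42)
    rf = GridSearchCV(RandomForestClassifier(),
                      {"n_estimators": [50, 100]}).fit(X_tr, y_tr)
    xgb = GridSearchCV(XGBClassifier(),
                       {"n_estimators": [50, 100]}).fit(X_tr, y_tr)
    rf_auc = roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1])
    xgb_auc = roc_auc_score(y_te, xgb.predict_proba(X_te)[:, 1])
    return rf if rf_auc > xgb_auc else xgb
```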

 Prediction Data Description


The client will send the data in multiple sets of files in batches at a given location. The data will contain wafer names and 590 columns of different sensor values for each wafer.

Apart from the prediction files, we also require a "schema" file from the client, which contains all the relevant information about the prediction files, such as:

Name of the files, Length of Date value in FileName, Length of Time value in FileName, Number of Columns, Name of the Columns and their datatype.


Then we repeat steps 2 and 3:

Data Validation

Data Insertion in Database

Finally, we go for:

Prediction

1) Data Export from Db - The data stored in the database is exported as a CSV file to be used for prediction.

2) Data Preprocessing   

  a) Check for null values in the columns. If present, impute the null values using the KNN imputer.

  b) Check if any column has zero standard deviation, remove such columns as we did in training.

3) Clustering - The KMeans model created during training is loaded, and clusters for the preprocessed prediction data are predicted.

4) Prediction - Based on the cluster number, the respective model is loaded and is used to predict the data for that cluster.

5) Once the prediction is made for all the clusters, the predictions along with the Wafer names are saved in a CSV file at a given location and the location is returned to the client.
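
For illustration, a minimal sketch of steps 3-5, assuming the KMeans model and per-cluster models were pickled during training under hypothetical file names:

```python
import pickle
import pandas as pd

# Hypothetical artifact names; real paths depend on the training run.
with open("kmeans.pkl", "rb") as f:
    kmeans = pickle.load(f)

data = pd.read_csv("prediction_data.csv")
wafers = data["Wafer"]
features = data.drop(columns=["Wafer"])

clusters = kmeans.predict(features)
results = []
for cluster_id in sorted(set(clusters)):
    # Load the model trained for this cluster and predict its rows.
    with open(f"model_cluster_{cluster_id}.pkl", "rb") as f:
        model = pickle.load(f)
    mask = clusters == cluster_id
    preds = model.predict(features[mask])
    results.append(pd.DataFrame({"Wafer": wafers[mask], "Prediction": preds}))

# Save the predictions along with the wafer names at a given location.
pd.concat(results).to_csv("Predictions.csv", index=False)
```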



Deployment


We will be deploying the model to the Pivotal Cloud Foundry platform; Heroku, AWS, Azure, and GCP can also be used.


Among all these platforms, Heroku is the only free platform for deployment with unlimited storage.


Pivotal was free and open source before September 2020; it has now become a paid platform.


AWS, Azure, and GCP all offer free deployment tiers, but with limited access.


More Details: WAFER SENSORS FAULTY DETECTION