Tic-Tac-Toe AI Bot Using Reinforcement Learning

Last Updated on May 3, 2021

About

We created a self-learning AI bot for the game of tic-tac-toe using reinforcement learning. The project was built in Python and hosted on the web using Google Sites. We used two AI techniques: the minimax algorithm to assign a weight to each outcome, and alpha-beta pruning to cut down the search and select the best outcome. The bot adapts in real time to the human's moves and has won more than 90% of the games played against more than 50 people.
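To make the search concrete, here is a minimal, hedged sketch of minimax with alpha-beta pruning for tic-tac-toe. The board representation, function names, and scoring are illustrative assumptions, not the project's actual code:

```python
# Minimal sketch of minimax with alpha-beta pruning for tic-tac-toe.
# Board: a list of 9 cells holding 'X', 'O', or None; 'X' maximizes.
# All names here are illustrative, not the project's actual code.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, maximizing, alpha, beta):
    win = winner(board)
    if win is not None:
        return 1 if win == 'X' else -1
    if all(cell is not None for cell in board):
        return 0  # draw
    mark = 'X' if maximizing else 'O'
    best = -2 if maximizing else 2
    for i in range(9):
        if board[i] is None:
            board[i] = mark
            score = minimax(board, not maximizing, alpha, beta)
            board[i] = None
            if maximizing:
                best, alpha = max(best, score), max(alpha, score)
            else:
                best, beta = min(best, score), min(beta, score)
            if beta <= alpha:  # prune: the opponent will avoid this branch
                break
    return best

def best_move(board, mark='X'):
    """Pick the empty cell with the best minimax value for `mark`."""
    maximizing = (mark == 'X')
    best_score, move = (-2, None) if maximizing else (2, None)
    for i in range(9):
        if board[i] is None:
            board[i] = mark
            score = minimax(board, not maximizing, -2, 2)
            board[i] = None
            if (score > best_score) if maximizing else (score < best_score):
                best_score, move = score, i
    return move
```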



Convolutional Neural Network Application to Classify Identical Twins (December 2019 - March 2020) (Python, ML)

Last Updated on May 3, 2021

About

-Transfer learning is employed to classify the images of identical twins.

-Transfer learning means reusing the knowledge acquired while solving a previous problem to solve a new, related problem.

-The pre-trained weights and biases are reused with some fine-tuning.

-Pre-trained Convolutional Neural Network models, developed for image recognition on the ImageNet dataset and provided by the Keras API, are used as feature extractors.

-All these models have proven very effective at image recognition, achieving very high accuracy and a low error rate on the ImageNet dataset.

-Only the convolutional base layers are used; the original fully connected layers are not included, and a new fully connected layer is added at the end for the required categorisation of the data.

-During dataset building, images of the identical twins were collected, defining the two categories (one per twin).

-VGG19 is used as a standalone feature extractor on the dataset.

-Feature vectors for the training dataset are obtained, and the mean feature vector for each category is calculated.

-Testing is done by comparing the feature vectors of the testing data with the mean feature vector of each category using cosine similarity.

-A fair accuracy was obtained during testing.
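To make the pipeline concrete, here is a hedged sketch using the Keras API: VGG19 without its fully connected layers as a feature extractor, per-category mean vectors, and cosine-similarity matching. File paths and helper names are illustrative assumptions:

```python
# Hedged sketch of the pipeline with the Keras API; file paths and
# helper names are illustrative assumptions.
import numpy as np
from tensorflow.keras.applications.vgg19 import VGG19, preprocess_input
from tensorflow.keras.preprocessing import image

# include_top=False drops the fully connected layers; pooling='avg' flattens to a vector
model = VGG19(weights="imagenet", include_top=False, pooling="avg")

def feature_vector(img_path):
    img = image.load_img(img_path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return model.predict(x, verbose=0)[0]

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Mean feature vector per category (one per twin), from the training images
means = {
    twin: np.mean([feature_vector(p) for p in paths], axis=0)
    for twin, paths in {"twin_a": ["a1.jpg", "a2.jpg"],
                        "twin_b": ["b1.jpg", "b2.jpg"]}.items()
}

def classify(img_path):
    v = feature_vector(img_path)
    return max(means, key=lambda twin: cosine_similarity(v, means[twin]))
```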



Bakery Management API

Last Updated on May 3, 2021

About

# Bakery Management System


This **Bakery Management System** is based on the Django REST Framework and uses token authentication.

For a better understanding, read the documentation at [docs](https://documenter.getpostman.com/view/14584052/TWDdjZYb).

The project is live at [Bakery](https://bakery-management-api.herokuapp.com/).


## Steps to run the API:


1. Install the dependencies: `pip install -r requirements.txt`

2. Run `python manage.py makemigrations`

3. Run `python manage.py migrate`

4. Run `python manage.py runserver`




## API-OVERVIEW


Now open http://127.0.0.1:8000/ in your browser; this will give details about the functionality offered.

To perform any of the operations mentioned, just append the corresponding relative URL to http://127.0.0.1:8000/.


***Note: All the endpoints corresponding to Ingredients and Dishes are accessible only to ADMIN.***





## MANAGING THE ACCOUNTS REGISTRATION AND AUTHENTICATION




### Registering an ADMIN


We can only register an admin through the Django admin panel. To access the Django admin panel, you have to create a superuser first.

Follow these steps to register an ADMIN user:

1. Run `python manage.py createsuperuser`

2. Fill in all the details (username, email, and password).

3. Now go to http://127.0.0.1:8000/admin/ and log in with the credentials you just entered.

4. Register the admin through the USERS section (tick `is_staff`; only then will the user be considered an ADMIN).



### Registering a CUSTOMER


URL - http://127.0.0.1:8000/accounts/register/ REQUEST-TYPE =[POST] **:**


This uses a POST request and expects `username`, `email`, `password`, `first_name`, and `last_name` to be entered through a JSON object or form data. The username needs to be UNIQUE.



### LOGGING IN A USER


URL - http://127.0.0.1:8000/accounts/login/ REQUEST-TYPE =[POST] **:**


This uses a POST request and expects `username` and `password`. After a successful login, it returns a token and an expiry.

The expiry denotes how long the token is valid; after it expires, you need to log in again.



### LOGGING OUT A USER


URL - http://127.0.0.1:8000/accounts/logout/ REQUEST-TYPE =[] **:**

For this, provide the token in the header. The user whose token you provide will be logged out.
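To illustrate the account flow end to end, here is a hedged client-side sketch using Python's requests library. The `Token <key>` Authorization scheme, the response key names, and the logout request type are assumptions based on common DRF token-auth conventions; check the Postman docs for the exact format:

```python
# Hedged client-side sketch using the requests library.
# The "Token <key>" Authorization scheme and the "token" response key are
# assumptions based on common DRF token-auth conventions.
import requests

BASE = "http://127.0.0.1:8000"

# Register a customer (username must be unique)
requests.post(f"{BASE}/accounts/register/", json={
    "username": "alice", "email": "alice@example.com", "password": "s3cret-pass",
    "first_name": "Alice", "last_name": "Doe",
})

# Log in: the response carries a token and its expiry
resp = requests.post(f"{BASE}/accounts/login/",
                     json={"username": "alice", "password": "s3cret-pass"})
token = resp.json()["token"]

# Send the token in the header on subsequent requests
headers = {"Authorization": f"Token {token}"}

# Log out (request type assumed to be POST; the README leaves it unspecified)
requests.post(f"{BASE}/accounts/logout/", headers=headers)
```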






## OPERATIONS ON INGREDIENTS(ACCESSIBLE ONLY TO ADMINS)



### Adding an Ingredient


URL - http://127.0.0.1:8000/ingredients/ REQUEST-TYPE =[POST]  **:**

This uses a POST request and expects `name`, `quantity`, `quantity_type`, and `cost_price` to be entered through a JSON object or form data. The name needs to be UNIQUE, and Django adds a primary key named "id" by default. The `quantity_type` field accepts exactly one of three choices: 'kg' for kilogram, 'lt' for litre, or '_' for plain numbers.



### Get list of all Ingredients


URL - http://127.0.0.1:8000/ingredients/  REQUEST-TYPE =[GET]  **:**

This returns a JSON value containing the list of all ingredients.



### Getting details of a single Ingredient


URL - http://127.0.0.1:8000/ingredients/id/ REQUEST-TYPE =[GET]  **:**

The "id" mentioned in the above url must be an integer referring to the "id" of the ingredient you want to fetch.This returns details of the 

ingredient you mentioned.



### Deleting a single Ingredient


URL - http://127.0.0.1:8000/ingredients/id/ REQUEST-TYPE =[DELETE]  **:**

The "id" mentioned in the above url must be an integer referring to the "id" of the ingredient you want to fetch.This deletes the 

ingredient you mentioned.
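A hedged client-side sketch of these ingredient operations (the token value, ids, and field values are placeholders):

```python
# Hedged sketch of the admin-only ingredient endpoints; the token value
# and all field values are placeholders.
import requests

BASE = "http://127.0.0.1:8000"
headers = {"Authorization": "Token <admin-token-from-login>"}

# Add an ingredient (name must be unique; quantity_type is 'kg', 'lt', or '_')
requests.post(f"{BASE}/ingredients/", headers=headers, json={
    "name": "flour", "quantity": 25, "quantity_type": "kg", "cost_price": 40,
})

# List all ingredients, fetch one by its integer id, then delete it
print(requests.get(f"{BASE}/ingredients/", headers=headers).json())
print(requests.get(f"{BASE}/ingredients/1/", headers=headers).json())
requests.delete(f"{BASE}/ingredients/1/", headers=headers)
```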






## OPERATIONS ON MENU(ACCESSIBLE ONLY TO ADMINS)




### Adding a dish to the menu


URL - http://127.0.0.1:8000/menu/ REQUEST-TYPE =[POST]  **:**

This uses a POST request and expects `name`, `quantity`, `description`, `cost_price`, `selling_price`, and `ingredients` to be entered through a JSON object or form data. The name needs to be UNIQUE, and Django adds a primary key named "id" by default. The `ingredients` field can contain multiple ingredient ids.



### Get list of all dishes(Available to CUSTOMER also)


URL - http://127.0.0.1:8000/menu/  REQUEST-TYPE =[GET]   **:**

This returns a JSON value containing the list of details of all dishes.


***Note: This API's response depends on the type of user logged in. If a customer is logged in, it returns only the names and prices.***



### Getting details of a single Dish


URL - http://127.0.0.1:8000/menu/id/ REQUEST-TYPE =[GET]   **:**

The "id" mentioned in the above url must be an integer referring to the "id" of the Dish you want to fetch.This returns details of the 

Dish you mentioned.



### Deleting a single Dish


URL - http://127.0.0.1:8000/menu/id/ REQUEST-TYPE =[DELETE]  **:**

The "id" mentioned in the above url must be an integer referring to the "id" of the Dish you want to fetch.This deletes the 

Dish you mentioned
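A hedged client-side sketch of the dish operations (token value, ids, and field values again assumed):

```python
# Hedged sketch of the dish endpoints; ids and field values are placeholders.
import requests

BASE = "http://127.0.0.1:8000"
headers = {"Authorization": "Token <admin-token-from-login>"}

# Add a dish; the ingredients field takes a list of ingredient ids
requests.post(f"{BASE}/menu/", headers=headers, json={
    "name": "chocolate cake", "quantity": 5, "description": "700g round cake",
    "cost_price": 250, "selling_price": 400, "ingredients": [1, 2],
})

# List dishes: admins get full details, customers only names and prices
print(requests.get(f"{BASE}/menu/", headers=headers).json())
```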






## OPERATIONS ON ORDER(ACCESSIBLE TO THE CUSTOMER )



### Adding/Placing an order 


URL - http://127.0.0.1:8000/order/ REQUEST-TYPE =[POST]   **:** 

This uses a POST request and expects `orderby` and `items_ordered` to be entered through a JSON object or form data. Django adds a primary key named "id" by default. The `items_ordered` field can contain multiple dish ids.



### Getting details of a single Order 


URL - http://127.0.0.1:8000/order/id/ REQUEST-TYPE =[GET]   **:**  

The "id" mentioned in the above url must be an integer referring to the "id" of the Order you want to fetch.This returns details of the 

Order you mentioned.


### Deleting a single Order 


URL - http://127.0.0.1:8000/order/id/ REQUEST-TYPE =[DELETE]   **:**

The "id" mentioned in the above url must be an integer referring to the "id" of the Order you want to delete.This deletes the 

Order you mentioned


### Order History 


URL - http://127.0.0.1:8000/order/history/ REQUEST-TYPE =[GET]  **:**  

This returns all the orders placed by the customer making the request (latest first).
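A hedged client-side sketch of the order operations (the token value, ids, and field values are placeholders):

```python
# Hedged sketch of the customer-side order endpoints; field values are placeholders.
import requests

BASE = "http://127.0.0.1:8000"
headers = {"Authorization": "Token <customer-token-from-login>"}

# Place an order for several dishes by their ids
requests.post(f"{BASE}/order/", headers=headers,
              json={"orderby": "alice", "items_ordered": [1, 3]})

# Fetch a single order, then the full history (latest first)
print(requests.get(f"{BASE}/order/1/", headers=headers).json())
print(requests.get(f"{BASE}/order/history/", headers=headers).json())
```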




Government Fund Tracking System Using Blockchain

Last Updated on May 3, 2021

About

The main idea behind the project is to track funds hierarchically, i.e., from the central government down to the common man, including everyone in this chain. We considered four hierarchical components: the central government, the state government, the contractor, and the resource provider/dealer.

In the beginning, the budgets finalized in the house are uploaded according to their respective categories. After funds are allocated, the state government initiates the required projects by documenting them and sends the documents to the central government. The central government then verifies the project details and, if satisfied, grants the project funds to the state government; otherwise it can reject the project. After receiving funds from the central government, the state government opens tenders for contractors, and a proper bidding system chooses the contractor for the specific project. As bidding and tender allocation are carried out by an automated bidding system with no human intervention, corruption is reduced.

A government committee checks the amount of work done synchronously and marks every step of progress by submitting a brief report to the hierarchical officer, who adds it to the blockchain. In this report, progress can be portrayed in the form of images, videos, written plans of the building or structure, etc. To get paid, the contractor has to submit a form detailing his total spending, with a proper distribution over the duration. These details are then checked by the respective authority of the state government, which then initiates the payment to the contractor. In this way, work done over a period gets paid, and the process repeats until the particular work is fully completed.



Wafer Sensor Fault Detection

Last Updated on May 3, 2021

About

Project Description:

Detecting faulty sensors in wafers using the K-Means, Random Forest, and Decision Tree algorithms.


Problem Statement

To build a classification methodology to predict the quality of wafer sensors based on the given training data. 

Architecture

1. Data Description
2. Data Validation
3. Data Insertion in Database
4. Model Training
5. Prediction Data Description
6. Data Validation
7. Data Insertion in Database
8. Prediction
9. Cloud Deployment

Data Description

The client will send data in multiple sets of files in batches at a given location. Data will contain Wafer names and 590 columns of different sensor values for each wafer. The last column will have the "Good/Bad" value for each wafer.

"Good/Bad" column will have two unique values +1 and -1. 

"+1" represents Bad wafer.

"-1" represents Good Wafer.

Apart from training files, we also require a "schema" file from the client, which contains all the relevant information about the training files such as:

Name of the files, Length of Date value in File Name, Length of Time value in File Name, Number of Columns, Name of the Columns, and their datatype.

Data Validation 

In this step, we perform different sets of validation on the given set of training files. 

a. Name Validation

b. Number of columns

c. Name of columns

d. Data type of columns

e. Null values in columns

If all the checks pass as per the requirements in the schema file, we move the files to "Good_Data_Folder"; otherwise we move them to "Bad_Data_Folder", as sketched below.
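A hedged sketch of this validation step; the schema keys, file-name pattern, and folder names are assumptions modeled on the description above:

```python
# Hedged sketch of schema-based batch-file validation; schema keys,
# the file-name regex, and folder names are assumptions.
import json, os, re, shutil
import pandas as pd

with open("schema_training.json") as f:
    schema = json.load(f)  # assumed keys: "NumberofColumns", "ColName"

def validate_file(path):
    # Name check: prefix, 8-digit date, 6-digit time (lengths come from the schema in practice)
    name_ok = re.match(r"wafer_\d{8}_\d{6}\.csv$", os.path.basename(path)) is not None
    df = pd.read_csv(path)
    cols_ok = df.shape[1] == schema["NumberofColumns"]
    names_ok = list(df.columns) == list(schema["ColName"].keys())
    no_all_null = not df.isnull().all(axis=0).any()  # no column may be entirely null
    return all([name_ok, cols_ok, names_ok, no_all_null])

for fname in os.listdir("Batch_Data"):
    src = os.path.join("Batch_Data", fname)
    dest = "Good_Data_Folder" if validate_file(src) else "Bad_Data_Folder"
    shutil.move(src, os.path.join(dest, fname))
```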

Data Insertion in Database

1) Database Creation and connection -- Create a database with the given name.

2) Table creation in the database -- A table named "Good_Data" is created in the database for inserting the files from the "Good_Data_Folder", based on the column names and datatypes given in the schema file.

3) Insertion of files in the table -- All the files in the "Good_Data_Folder" are inserted into the above-created table. If any file has an invalid data type in any of its columns, the file is not loaded into the table and is moved to "Bad_Data_Folder".

Model Training

1) Data Export from Db - The data in a stored database is exported as a CSV file to be used for model training.

2) Data Preprocessing  

  a) Check for null values in the columns. If present, impute the null values using the KNN imputer.

  b) Check if any column has zero standard deviation, remove such columns as they don't give any information during model training.

3) Clustering --- The KMeans algorithm is used to create clusters in the preprocessed data. The optimum number of clusters is selected by plotting the elbow plot, and for dynamic selection of the number of clusters we use the "KneeLocator" function. The idea behind clustering is to train a separate algorithm on the data in each cluster. The KMeans model is trained over the preprocessed data, and the model is saved for further use in prediction.

4) Model Selection --- After clusters are created, we find the best model for each cluster. We use two algorithms, "Random Forest" and "XGBoost". For each cluster, both algorithms are trained with the best parameters derived from GridSearch. We calculate the AUC score for both models and select the model with the better score. A model is selected in this way for each cluster, and all the per-cluster models are saved for use in prediction. A condensed sketch of this training flow is shown below.
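```python
# Condensed, hedged sketch of the training flow; column names, paths,
# parameter grids, and the label mapping are illustrative assumptions.
import pandas as pd
from kneed import KneeLocator
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import KNNImputer
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split
from xgboost import XGBClassifier

df = pd.read_csv("training_export.csv")              # data exported from the database
y = (df["Good/Bad"] == 1).astype(int)                # +1 (bad wafer) -> 1, -1 (good) -> 0
X = df.drop(columns=["Wafer", "Good/Bad"])

X = X.loc[:, X.std() != 0]                           # drop zero-standard-deviation columns
X = pd.DataFrame(KNNImputer().fit_transform(X), columns=X.columns)

# Elbow values for k = 1..10; KneeLocator picks the knee automatically
inertias = [KMeans(n_clusters=k, random_state=0).fit(X).inertia_ for k in range(1, 11)]
k = KneeLocator(range(1, 11), inertias, curve="convex", direction="decreasing").knee
kmeans = KMeans(n_clusters=k, random_state=0).fit(X)

models = {}
for cluster in range(k):
    mask = kmeans.labels_ == cluster
    Xtr, Xte, ytr, yte = train_test_split(X[mask], y[mask], test_size=0.3, random_state=0)
    best_auc, best_model = -1.0, None
    for est, grid in [(RandomForestClassifier(), {"n_estimators": [100, 200]}),
                      (XGBClassifier(), {"max_depth": [3, 5]})]:
        search = GridSearchCV(est, grid, cv=3).fit(Xtr, ytr)
        auc = roc_auc_score(yte, search.predict_proba(Xte)[:, 1])
        if auc > best_auc:
            best_auc, best_model = auc, search.best_estimator_
    models[cluster] = best_model                     # one best model per cluster, saved for prediction
```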

 Prediction Data Description


The client will send the data in multiple sets of files in batches at a given location. The data will contain wafer names and 590 columns of different sensor values for each wafer.

Apart from the prediction files, we also require a "schema" file from the client, which contains all the relevant information about the prediction files such as:

Name of the files, length of the date value in the file name, length of the time value in the file name, number of columns, name of the columns, and their datatypes.


Then we again repeat steps 2 and 3:

Data Validation

Data Insertion in Database

Finally, we move on to:

Prediction

 

1) Data Export from Db - The data in the stored database is exported as a CSV file to be used for prediction.

2) Data Preprocessing   

  a) Check for null values in the columns. If present, impute the null values using the KNN imputer.

  b) Check if any column has zero standard deviation, remove such columns as we did in training.

3) Clustering - The KMeans model created during training is loaded, and clusters for the preprocessed prediction data are predicted.

4) Prediction - Based on the cluster number, the respective model is loaded and is used to predict the data for that cluster.

5) Once the prediction is made for all the clusters, the predictions along with the wafer names are saved in a CSV file at a given location, and the location is returned to the client. A condensed sketch of this prediction flow is shown below.
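```python
# Condensed, hedged sketch of the prediction flow; pickle file names are
# assumptions. In practice, reuse the column list saved during training
# instead of recomputing which columns to drop.
import pickle
import pandas as pd
from sklearn.impute import KNNImputer

df = pd.read_csv("prediction_export.csv")           # data exported from the database
wafers, X = df["Wafer"], df.drop(columns=["Wafer"])
X = X.loc[:, X.std() != 0]
X = pd.DataFrame(KNNImputer().fit_transform(X), columns=X.columns)

with open("models/kmeans.pkl", "rb") as f:          # KMeans model saved during training
    kmeans = pickle.load(f)
clusters = kmeans.predict(X)

rows = []
for cluster in sorted(set(clusters)):
    with open(f"models/cluster_{cluster}.pkl", "rb") as f:
        model = pickle.load(f)                      # best model for this cluster
    mask = clusters == cluster
    rows.append(pd.DataFrame({"Wafer": wafers[mask].values,
                              "Prediction": model.predict(X[mask])}))

pd.concat(rows).to_csv("Prediction_Output/Predictions.csv", index=False)
```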



Deployment


We will deploy the model to the Pivotal Cloud Foundry platform, though we can also use the Heroku, AWS, Azure, or GCP platforms.


Among all these platforms, Heroku is the only free, open-source platform for deployment with unlimited storage.


Pivotal was free and open source before September 2020; it has since become a paid platform.


AWS, Azure, and GCP all offer free deployment options, but with limited access.



HackSat

Last Updated on May 3, 2021

About

Imagine a satellite that lets anyone stop thinking about data transfer, energy, and all of those nuisances.

What it does

HackSat is a prototype for a CubeSat blueprint that will allow anyone who wants to run an experiment in outer space to stop worrying about how to send data or how to provide energy, and start thinking about which data to send and when to send it.

It is also worth noting that everything will be released under an open-source license.

How we built it

We designed the basic structure based on the CubeSat specs provided by California Polytechnic State University and used by NASA to launch low-cost satellites.

We printed the structure on a couple of 3D printers.

We handcrafted all the electronics using a combination of three Arduinos, which required us to search for low-consumption components to maximize battery power; we also worked on minimizing the energy consumption of the whole satellite.

We opted to use recycled components, like solar panels, cables, a battery, a converter...

We worked a lot on the data transfer part, so the Sat can sleep most of the time, in an effort to extend battery life even further.

And almost 24 hours of nonstop work and a lot of enthusiasm!

Challenges we ran into

We found the electronics the most challenging part, because our main objective was to get optimal energy use out of our battery and avoid draining it too fast.

Another point worth mentioning was the data transfer between the experiment section and the Sat section: we wanted to isolate each part from the other as much as possible, so the experiment just needs to tell the Sat to send the data and nothing more.

Accomplishments that we are proud of

We are very proud to have accomplished the objective of making a viable prototype. Even though we faced some issues during these days, we managed to overcome all of them, and as a consequence we have grown wiser and our vision has become wider.

What we learned

During the development of HackSat, we learned a lot about radio transmission, and a great deal about serial ports and how to communicate data between three different microcontrollers using two different protocols.

What's next for HackSat

The first improvement to make is fixing some issues we encountered with the measurements in our designs, which required some on-site adjustments.

Another obvious improvement is updating the case so it is made of aluminium instead of plastic; this is currently the first blocking issue for launching HackSat.

Finally, we would move to more dedicated hardware, which would most likely allow us to further optimize the battery consumption and the overall lifespan of the Sat.
