IIT Bombay Demonstrates Conversion of Nitrogen Generator to Oxygen Generator

Last Updated on May 3, 2021

About

In the wake of medical oxygen shortage to treat COVID-19 patients, the Indian Institute of Technology (IIT)-Bombay is piloting a new technology to convert a nitrogen unit into an oxygen generating unit.

The institute, in a statement released on Thursday, said the process relied on a simple technological intervention of converting a pressure swing adsorption (PSA) nitrogen unit into a PSA oxygen unit.

Initial tests at IIT-Bombay had shown promising results, the statement said.

“Oxygen production could be achieved at 3.5 atm pressure, with a purity level of 93%-96%. This gaseous oxygen can be utilised for COVID-19-related needs across existing hospitals and upcoming COVID-19 specific facilities by providing a continuous supply of oxygen,” the institute said.

“It [conversion of a nitrogen unit into an oxygen unit] has been done by fine-tuning the existing nitrogen plant set-up and changing the molecular sieves from carbon to zeolite,” the statement said, quoting Prof. Milind Atrey, dean (R&D), IITB, who led the project.

Mr. Atrey said such nitrogen plants, which take air from the atmosphere as raw material, are available in various industrial plants across India.

“Therefore, each of them could potentially be converted into an oxygen generator, thus helping us tide over the current public health emergency,” he said.

The pilot project is a collaborative effort among IIT-Bombay, Tata Consulting Engineers and Spantech Engineers, Mumbai, which deal with PSA nitrogen and oxygen plant production, the statement said.

To undertake this study on an urgent basis, an MoU was signed among IIT-Bombay, Tata Consulting Engineers and Spantech Engineers to finalise a standard operating procedure that may be leveraged across the country, it said.

A PSA nitrogen plant in the refrigeration and cryogenics laboratory of the IIT was identified for conversion, to validate the proof of concept.

More Details: IIT Bombay demonstrates conversion of Nitrogen generator to Oxygen generator

Submitted By



Voice Of The Day

Last Updated on May 3, 2021

About

Inspiration

The format used to work well on the radio, so we wanted to recreate those memories on Alexa.

What it does

Players can listen to up to three voice clips of well-known people and/or celebrities talking every day, as well as view a blurred image of the celebrity. After each clip, Alexa will ask you who you think is talking, and you must try to answer correctly. This earns you a score for the monthly and all-time leader boards. The player can ask Alexa for hints, or skip that voice clip to move on to the next one. Users' scores are awarded depending on how many incorrect answers they gave for that voice and whether they used a hint. Users can also ask to hear yesterday’s answers, in case they couldn’t get the answers on that day.

How I built it

To create the structure of the skill, we used the Alexa Skills Kit CLI.

We used Amazon's S3 storage for all our in-game assets, such as the audio clips and images.

We used the Alexa Presentation Language to create the visual interface for the skill.

We used the Amazon GameOn SDK to create monthly and all-time leader boards that all users can enter without any sign-up.

Every day, free users will be given the ‘easy’ clip to answer. The set of clips each day is selected depending on the day of the year (a minimal sketch of this selection follows below). Users who have purchased premium gain access to the ‘medium’ and ‘hard’ clips every day, as well as being able to ask for hints for the voices, skip a voice if they are stuck, and enter scores onto the leader boards.
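The skill itself is built with the Alexa Skills Kit, but the day-of-year selection can be illustrated with a short, hypothetical Python sketch. All names and clip files below are made up for illustration; in the real skill the clips live in S3.

```python
from datetime import date

# Hypothetical clip sets; in the real skill these would be loaded from S3.
clip_sets = [
    {"easy": "clip_001_easy.mp3", "medium": "clip_001_med.mp3", "hard": "clip_001_hard.mp3"},
    {"easy": "clip_002_easy.mp3", "medium": "clip_002_med.mp3", "hard": "clip_002_hard.mp3"},
]

def clips_for_today(is_premium: bool) -> list:
    # Pick today's clip set from the day of the year.
    day_index = date.today().timetuple().tm_yday % len(clip_sets)
    todays_set = clip_sets[day_index]
    # Free users get only the 'easy' clip; premium users get all three.
    if is_premium:
        return [todays_set["easy"], todays_set["medium"], todays_set["hard"]]
    return [todays_set["easy"]]
```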

Accomplishments that I’m proud of

As well as creating a high-quality voice-only experience, we developed the skill to be very APL-focused, which we are very proud of. The visual assets we used for the project were very high quality, and we were able to dynamically display the appropriate data for each screen within the skill. The content loaded depends on who is talking, as well as on the difficulty of the voice that the user is answering. APL also allowed us to blur and unblur the celebrity images, as opposed to having separate blurred and unblurred images for each person.

We were also very pleased with how we implemented the GameOn SDK into the skill. When the user submits a score, a random avatar is created for them, and their score is submitted under this avatar. This avoids any sign-up to use the leader boards, allowing all users to use them easily.

The GameOn SDK also allows us to create monthly and all-time competitions/leader boards into which all users are automatically entered.

What I learned

I have learnt how to develop with APL, as well as better practices for structuring it more efficiently. For example, there are many APL views in the project, all of which are almost identical; what I have learnt is that, in future projects, it would be more effective to condense these down into one primary view used for each screen, populated with the appropriate data.

I have also been able to hone the prompts to the user for upsells and for showing the leader boards. Testing has shown that constant prompts on each play can become tedious for the user, so we have reduced their frequency for a much better user experience.

More Details: Voice of the Day

Submitted By


Loan Prediction

Last Updated on May 3, 2021

About

A company wants to automate the loan eligibility process (in real time) based on the customer details provided while filling in the online application form. These details are Gender, Marital Status, Education, Number of Dependents, Income, Loan Amount, Credit History and others. To automate this process, they have provided a dataset to identify the customer segments that are eligible for a loan, so that they can specifically target these customers. So in this project our main objective is to predict whether an individual is eligible for a loan or not based on the given dataset.


For simplicity, I divided the project into small parts:


  1. Data Collection: I collected the data from Analytics Vidhya as CSV files. We have two CSV files: one is the training data, which is used to train the model, and the other is the test data, which is used for prediction with the trained model.
  2. Import Libraries: I imported different scikit-learn packages for the algorithms and other tasks.
  3. Reading Data: I read the data using pandas' read_csv() function.
  4. Data Preprocessing: In this part I first found missing values, then either removed a column or imputed a value (mean, mode, median) according to the amount of data missing in that column.

I checked the unique values in each column. Then I did label encoding to convert all string-type data to integer values, and used the get_dummies function to convert each unique value into a separate column. I also computed the correlation matrix, which shows how the columns correlate with each other.

Then I split the data. I also did an analysis of each column and row of the dataset (a minimal sketch of these preprocessing steps follows below).
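The following is a minimal sketch of the reading, imputation, encoding, correlation and splitting steps described above. The file and column names ('train.csv', 'Loan_ID', 'Loan_Status') are assumptions based on the Analytics Vidhya loan prediction dataset, not necessarily the exact names used in the project.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Read the training data (hypothetical file name).
train = pd.read_csv("train.csv")

# Impute missing values: mode for categorical columns, median for numeric ones.
for col in train.columns:
    if train[col].isnull().any():
        filler = train[col].mode()[0] if train[col].dtype == "object" else train[col].median()
        train[col] = train[col].fillna(filler)

# Encode categorical features as dummy columns and map the target to 0/1.
X = pd.get_dummies(train.drop(columns=["Loan_ID", "Loan_Status"]))
y = train["Loan_Status"].map({"Y": 1, "N": 0})

# Correlation matrix between the encoded feature columns.
print(X.corr())

# Split into training and validation sets.
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)
```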


Here I selected a classification algorithm because this is a classification problem, i.e. the target value is of a categorical data type.


Then I created a model and trained it using the logistic regression algorithm, which is a classification algorithm, by feeding it the training dataset. After creating the model, I applied the same data preprocessing to the test dataset and then fed the test dataset to the trained model, which predicted its target values. I then evaluated the accuracy of the model by comparing the predicted target values against the actual target values.

After this, I used another algorithm, the random forest classifier. I trained the model using the random forest classifier and then calculated its accuracy.

I compared the accuracy of both algorithms and preferred the one with the better accuracy (a sketch of this comparison follows below).
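A minimal sketch of training and comparing the two classifiers, under the same assumptions as the sketch above (file and column names are hypothetical):

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Same hypothetical reading and imputation as before.
train = pd.read_csv("train.csv")
for col in train.columns:
    if train[col].isnull().any():
        filler = train[col].mode()[0] if train[col].dtype == "object" else train[col].median()
        train[col] = train[col].fillna(filler)

X = pd.get_dummies(train.drop(columns=["Loan_ID", "Loan_Status"]))
y = train["Loan_Status"].map({"Y": 1, "N": 0})
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

# Train both models on the same split and compare validation accuracy.
for model in (LogisticRegression(max_iter=1000), RandomForestClassifier(n_estimators=100)):
    model.fit(X_train, y_train)
    print(type(model).__name__, accuracy_score(y_val, model.predict(X_val)))
```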


In this project, I got 78.03% accuracy when I created the model using the random forest classifier, and 80.06% when I created the model using logistic regression.


More Details: Loan prediction

Submitted By


Online Gardening Store

Last Updated on May 3, 2021

About

This is a project made with Node.js, MySQL and some npm packages. The aim of the project is to provide gardening enthusiasts with an easy interface from which they can buy gardening necessities online. There are various categories of products from which the user can buy.

Users can add items to the cart, modify them, and delete the items they no longer need. The application also has user authentication. To make payments easier, customers can directly choose one of their saved cards, and taxes are calculated on the subtotal once it is obtained. As of now, no payment integration is done. Once a user submits an order, he/she is also able to see the history of previous orders.

When a user registers in the application, or confirms an order, a verification of the login or order is sent to the registered email ID and mobile number.

For future enhancement we have thought of:

  • adding filtering options
  • a search feature
  • taking user input through forms for their requirements and using NLP to retrieve the relevant products
  • a chatbot for the whole application, for customers who have any queries
More Details: Online Gardening Store

Submitted By


Snake Game

Last Updated on May 3, 2021

About

The objective of this project is to build a snake game using Python. In this Python project, the player has to move a snake so that it touches the red dot. If the snake touches itself or the border of the game, then the game is over.

The following are the modules I used:

The turtle module, random module, time module, and basic Python concepts are used in this project.

The turtle module gives us the ability to draw on a drawing board.

The random module is used to generate random numbers.

The time module is an inbuilt Python module that provides time-related functionality.


The steps to build a snake game project in Python:

The first step is importing the libraries; then we create the game screen, and also the snake and the red dot. Keyboard binding is done next, followed by the main game loop.


We need to import the turtle, random, and time modules.

To create the game screen I used (a sketch follows the list below):

  • title(): sets the desired title of the screen
  • setup(): sets the height and width of the screen
  • tracer(0): turns off automatic screen updates
  • bgcolor(): sets the background colour
  • forward(): moves the turtle forward by the specified amount
  • right(): turns the turtle clockwise, and left() turns the turtle anticlockwise
  • penup(): stops the turtle from drawing while it moves
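A minimal sketch of the screen setup with these calls; the title, size and colours are illustrative choices, not necessarily those used in the project:

```python
import turtle

# Set up the game screen.
screen = turtle.Screen()
screen.title("Snake Game")
screen.setup(width=600, height=600)
screen.bgcolor("black")
screen.tracer(0)  # turn off automatic updates; the main loop calls screen.update()
```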

For creating the game objects (the snake head and the red dot) I used (a sketch follows the list below):

  • Turtle(): creates a new turtle object
  • hideturtle(): hides the turtle
  • goto(): moves the turtle to the given x and y coordinates
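A minimal sketch of creating the snake head and the red dot as turtle objects, continuing the screen setup above; the shapes, colours and positions are illustrative:

```python
# Snake head.
head = turtle.Turtle()
head.shape("square")
head.color("green")
head.penup()            # do not draw a trail while moving
head.goto(0, 0)
head.direction = "stop"

# Red dot (food).
food = turtle.Turtle()
food.shape("circle")
food.color("red")
food.penup()
food.goto(0, 100)
```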


Next, we add key bindings, which decide the direction in which the snake will move (see the sketch below).
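A sketch of the keyboard binding, continuing the objects created above; the arrow-key names and the rule that the snake cannot reverse directly into itself are illustrative choices:

```python
# Change the snake's direction, disallowing a direct reversal.
def go_up():
    if head.direction != "down":
        head.direction = "up"

def go_down():
    if head.direction != "up":
        head.direction = "down"

def go_left():
    if head.direction != "right":
        head.direction = "left"

def go_right():
    if head.direction != "left":
        head.direction = "right"

# Bind the arrow keys to the handlers.
screen.listen()
screen.onkeypress(go_up, "Up")
screen.onkeypress(go_down, "Down")
screen.onkeypress(go_left, "Left")
screen.onkeypress(go_right, "Right")
```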


If the snake touches the border of the game, the game is over; screen.clear() will delete all of the turtle's drawings on the screen. A sketch of the main game loop follows.
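A simplified sketch of the main loop, continuing the objects above. Growing the snake's body and keeping score are omitted for brevity, and the border limits assume the 600x600 screen from the earlier sketch:

```python
import random
import time

delay = 0.1
while True:
    screen.update()

    # Game over if the head crosses the border: reset the head to the centre.
    if abs(head.xcor()) > 290 or abs(head.ycor()) > 290:
        time.sleep(1)
        head.goto(0, 0)
        head.direction = "stop"

    # If the head reaches the red dot, move the dot to a new random position.
    if head.distance(food) < 20:
        food.goto(random.randint(-280, 280), random.randint(-280, 280))

    # Move the head one step in the current direction.
    step = 20
    if head.direction == "up":
        head.sety(head.ycor() + step)
    elif head.direction == "down":
        head.sety(head.ycor() - step)
    elif head.direction == "left":
        head.setx(head.xcor() - step)
    elif head.direction == "right":
        head.setx(head.xcor() + step)

    time.sleep(delay)
```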



We successfully developed the snake game project in Python.




More Details: Snake Game

Submitted By


Wafer Sensors Faulty Detection

Last Updated on May 3, 2021

About

Project Description :

Detecting faulty sensors in wafers using the K-Means, Random Forest and Decision Tree algorithms.


Problem Statement

To build a classification methodology to predict the quality of wafer sensors based on the given training data. 

Architecture

1. Data Description

2. Data Validation

3. Data Insertion in Database

4. Model Training

5. Prediction Data Description

6. Data Validation

7. Data Insertion in Database

8. Prediction

9. Cloud Deployment

Data Description

The client will send data in multiple sets of files in batches at a given location. Data will contain Wafer names and 590 columns of different sensor values for each wafer. The last column will have the "Good/Bad" value for each wafer.

"Good/Bad" column will have two unique values +1 and -1. 

"+1" represents Bad wafer.

"-1" represents Good Wafer.

Apart from training files, we also require a "schema" file from the client, which contains all the relevant information about the training files such as:

Name of the files, Length of Date value in File Name, Length of Time value in File Name, Number of Columns, Name of the Columns, and their datatype.

Data Validation 

In this step, we perform different sets of validation on the given set of training files. 

a. Name Validation

b. Number of columns

c. Name of columns

d. Data type of columns

e. Null values in columns

If all the checks pass as per the schema file, we move such files to the "Good_Data_Folder"; otherwise we move them to the "Bad_Data_Folder" (a minimal sketch of this validation follows below).
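A minimal sketch of validating one file against the schema. Only two of the checks (number of columns, null-value columns) are shown, and the schema keys, expected column count and folder handling below are illustrative assumptions, not the project's actual schema format:

```python
import shutil
import pandas as pd

# Hypothetical schema, normally read from the client's schema file.
schema = {"NumberofColumns": 592, "SampleFileName": "wafer_08012020_120000.csv"}

def validate_file(path: str) -> None:
    df = pd.read_csv(path)
    ok = (
        len(df.columns) == schema["NumberofColumns"]    # b) number of columns
        and not df.isnull().all(axis=0).any()           # e) no column that is entirely null
    )
    # Move the file according to the validation result.
    shutil.move(path, "Good_Data_Folder" if ok else "Bad_Data_Folder")
```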

Data Insertion in Database

1) Database creation and connection -- Create a database with the given name.

2) Table creation in the database -- A table named "Good_Data" is created in the database for inserting the files from the "Good_Data_Folder", based on the column names and datatypes given in the schema file.

3) Insertion of files in the table -- All the files in the "Good_Data_Folder" are inserted into the above-created table. If any file has an invalid data type in any of the columns, the file is not loaded into the table and is moved to the "Bad_Data_Folder".

Model Training

1) Data Export from Db - The data in a stored database is exported as a CSV file to be used for model training.

2) Data Preprocessing  

  a) Check for null values in the columns. If present, impute the null values using the KNN imputer.

  b) Check if any column has zero standard deviation, remove such columns as they don't give any information during model training.

3) Clustering --- The KMeans algorithm is used to create clusters in the preprocessed data. The optimum number of clusters is selected by plotting the elbow plot, and for dynamic selection of the number of clusters we use the "KneeLocator" function. The idea behind clustering is to train a different model for each cluster. The KMeans model is trained over the preprocessed data, and the model is saved for further use in prediction (a sketch of the preprocessing and clustering steps follows below).
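A minimal sketch of the preprocessing and clustering steps above. The file name and the exact column names ('Wafer', 'Good/Bad') are assumptions based on the data description:

```python
import pandas as pd
from sklearn.impute import KNNImputer
from sklearn.cluster import KMeans
from kneed import KneeLocator

data = pd.read_csv("training_data.csv")          # hypothetical exported CSV
X = data.drop(columns=["Wafer", "Good/Bad"])

# a) Impute null values with the KNN imputer.
X = pd.DataFrame(KNNImputer(n_neighbors=3).fit_transform(X), columns=X.columns)

# b) Drop columns with zero standard deviation (they carry no information).
X = X.loc[:, X.std() != 0]

# 3) Elbow-plot values plus KneeLocator to pick the number of clusters.
wcss = [KMeans(n_clusters=k, random_state=42).fit(X).inertia_ for k in range(1, 11)]
n_clusters = KneeLocator(range(1, 11), wcss, curve="convex", direction="decreasing").knee
kmeans = KMeans(n_clusters=n_clusters, random_state=42).fit(X)
data["Cluster"] = kmeans.labels_
```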

4) Model Selection --- After clusters are created, we find the best model for each cluster. We are using two algorithms, "Random Forest" and "XGBoost". For each cluster, both the algorithms are passed with the best parameters derived from GridSearch. We calculate the AUC scores for both models and select the model with the best score. Similarly, the model is selected for each cluster. All the models for every cluster are saved for use in prediction.
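Continuing the sketch above, a simplified version of the per-cluster model selection. The parameter grids are illustrative, and the AUC is computed on a held-out split within each cluster:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

best_models = {}
for cluster_id in sorted(data["Cluster"].unique()):
    mask = data["Cluster"] == cluster_id
    Xc = X[mask]
    yc = (data.loc[mask, "Good/Bad"] == 1).astype(int)   # +1 means a bad wafer
    X_tr, X_te, y_tr, y_te = train_test_split(Xc, yc, test_size=0.3, random_state=42)

    candidates = [
        GridSearchCV(RandomForestClassifier(), {"n_estimators": [50, 100]}, cv=3),
        GridSearchCV(XGBClassifier(eval_metric="logloss"), {"max_depth": [3, 5]}, cv=3),
    ]
    scored = []
    for gs in candidates:
        gs.fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, gs.best_estimator_.predict_proba(X_te)[:, 1])
        scored.append((auc, gs.best_estimator_))

    # Keep the model with the best AUC score for this cluster.
    best_models[cluster_id] = max(scored, key=lambda t: t[0])[1]
```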

 Prediction Data Description


The client will send the data in multiple sets of files in batches at a given location. The data will contain wafer names and 590 columns of different sensor values for each wafer.

Apart from the prediction files, we also require a "schema" file from the client, which contains all the relevant information about the prediction files, such as:

Name of the files, Length of Date value in FileName, Length of Time value in FileName, Number of Columns, Name of the Columns and their datatype.


Then we again repeat steps 2 and 3:

 Data Validation  

Data Insertion in Database 


Finally we go for


 Prediction

 

1) Data Export from Db - The data in the stored database is exported as a CSV file to be used for prediction.

2) Data Preprocessing   

  a) Check for null values in the columns. If present, impute the null values using the KNN imputer.

  b) Check if any column has zero standard deviation, remove such columns as we did in training.

3) Clustering - The KMeans model created during training is loaded, and the clusters for the preprocessed prediction data are predicted.

4) Prediction - Based on the cluster number, the respective model is loaded and is used to predict the data for that cluster.

5) Once the prediction is made for all the clusters, the predictions along with the Wafer names are saved in a CSV file at a given location and the location is returned to the client.
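A minimal sketch of the prediction flow. It assumes the KMeans model ('kmeans'), the per-cluster models ('best_models') and the retained training columns ('X.columns') from the sketches above are available, for example reloaded from disk with joblib; file and column names are again assumptions:

```python
import pandas as pd
from sklearn.impute import KNNImputer

pred_data = pd.read_csv("prediction_data.csv")        # hypothetical exported CSV
features = pred_data.drop(columns=["Wafer"])
features = pd.DataFrame(
    KNNImputer(n_neighbors=3).fit_transform(features), columns=features.columns
)[X.columns]                                          # keep only the columns used in training

# Assign each wafer to a cluster, then predict with that cluster's model.
clusters = kmeans.predict(features)

results = []
for cluster_id, model in best_models.items():
    rows = clusters == cluster_id
    preds = model.predict(features.loc[rows])
    results.append(pd.DataFrame({"Wafer": pred_data.loc[rows, "Wafer"].values,
                                 "Prediction": preds}))

# Save the predictions along with the wafer names at the agreed location.
pd.concat(results).to_csv("Predictions.csv", index=False)
```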



Deployment


We will be deploying the model to the Pivotal Cloud Foundry platform. Besides Pivotal, we can also use the Heroku, AWS, Azure or GCP platforms.


Among all the above platforms, Heroku is the only free, open-source platform for deployment with unlimited storage.


Pivotal was free and open source before September 2020; now it has become a paid platform.


AWS, Azure and GCP all have free deployment options, but with limited access.


More Details: WAFER SENSORS FAULTY DETECTION