Covid Data Analysis

Last Updated on May 3, 2021


Data analysis and visualization of COVID-19 data




Smart Glasses For Visually Impaired



This is our second-year Hardware & Software Tools project. We wanted to invent something that would benefit handicapped people in some way, and we came up with this idea for glasses that could help blind people sense whether there is an object in front of them that they might hit their head on. The white cane they use when walking helps them navigate the ground but does not do much for obstacles up above. Using an Arduino Pro Mini MCU, an ultrasonic sensor, and a buzzer, we created glasses that sense the distance of an object in front and beep to alert the wearer that something is in their path. They are simple and inexpensive to make. Credit to for some of the parts.


These “Smart Glasses” are designed to help blind people read and translate typed text written in English. Inventions of this kind offer a way to motivate blind students to complete their education despite all their difficulties. The main objective is to develop a new way for blind people to read text and to facilitate their communication. The first task of the glasses is to scan a text image and convert it into audio, which the user listens to through a headphone connected to the glasses. The second task is to translate the whole text, or selected words, by pressing a button that is likewise connected to the glasses.

The glasses use several technologies to perform these tasks: OCR, Google Text-to-Speech (gTTS), and Google Translation. Text in the image is detected using OpenCV and Optical Character Recognition (OCR) with Tesseract, together with the Efficient and Accurate Scene Text detector (EAST). To convert the text into speech, the system uses Text-to-Speech technology (gTTS); for translating the text, it uses the Google Translation API. The glasses are also fitted with an ultrasonic sensor, which measures the distance between the user and the object bearing the text so that a clear picture can be taken; the picture is captured when the user presses the button. In addition, a motion sensor together with a Radio-Frequency Identification (RFID) reader was used to announce the locations of the university's halls, classes, and labs to the user. All computing and processing was done on a Raspberry Pi 3 B+ and a Raspberry Pi 3 B.

In testing, the combination of OCR with the EAST detector provided very high accuracy, with the glasses recognizing almost 99% of the text. However, the glasses have some drawbacks, such as supporting only the English language and a maximum image-capture distance of 40-150 cm. As a future plan, it is possible to support more languages and to enhance the design to make it smaller and more comfortable to wear.
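The button-press flow described above (check the ultrasonic distance, capture, OCR, then speak) can be sketched as follows. The 40-150 cm bounds come from the text; the function names are illustrative stand-ins, with the real capture, OCR (Tesseract/EAST), and speech (gTTS) steps passed in as callables.

```python
# Sketch of the capture pipeline: the ultrasonic reading gates the camera so
# pictures are only taken inside the usable 40-150 cm range stated above.

MIN_CAPTURE_CM = 40
MAX_CAPTURE_CM = 150

def in_capture_range(distance_cm: float) -> bool:
    """True when the text-bearing object is close enough for a clear shot."""
    return MIN_CAPTURE_CM <= distance_cm <= MAX_CAPTURE_CM

def handle_button_press(distance_cm, capture, ocr, speak):
    """On a button press: capture, recognize, and read the text aloud,
    but only when the object is within the usable range."""
    if not in_capture_range(distance_cm):
        return None  # too close or too far for a clear picture
    text = ocr(capture())
    speak(text)
    return text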




Design and Analysis of Automobile Chassis



Completed under the guidance of Dr. Shailesh Ganpule, Department of Mechanical and Industrial Engineering, from August 2019 to November 2019. The objective of this design analysis is to find the best material and most suitable cross-section for a common “goods carrier truck” ladder chassis, with constraints on maximum shear stress, equivalent stress, and deflection of the chassis under the maximum load condition. At present, the ladder chassis used for buses and trucks are of C and I cross-section type, but here we also analysed box-type and tube-type sections. Trucks generally carry heavy loads, so there is always a possibility of failure or fracture in the chassis/frame; a chassis with a high-strength cross-section is therefore needed to minimise failures while retaining a factor of safety in the design.

The vehicle chassis was modelled with four different cross-sections: C, I, rectangular box (hollow), and tubular. The problem addressed in this dissertation work was to design and analyse a ladder chassis using suitable CAD software and Ansys 19.2. The report covers work toward optimisation of the truck chassis under stiffness and strength constraints. Modelling was done in SolidWorks and analysis in Ansys 19.2. The overhangs of the chassis were analysed analytically for stresses and deflections, and the results were compared with those obtained from the analysis software.

The work involved designing a heavily loaded vehicle chassis in SolidWorks with stress simulation and strain analysis in Ansys; carrying out failure analysis using the von Mises criterion to assess sustainability; performing a convergence analysis to select the most optimised model with the desired factor of safety; and comparing the software (practical) values with the theoretical values obtained.
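The reason cross-section choice matters can be seen from the bending-stress formula sigma = M*c/I: for the same bending moment, a section with a larger second moment of area I carries lower stress. The sketch below computes I for a hollow rectangular box; the dimensions are made up for illustration and are not taken from the actual chassis.

```python
# Illustrative section calculation: second moment of area of a hollow
# rectangular box (outer box minus inner box) and the resulting maximum
# bending stress sigma = M*c/I. Units: mm, N*mm, MPa.

def hollow_box_I(b_out: float, h_out: float, t: float) -> float:
    """Second moment of area (mm^4) of a hollow rectangular box about its
    horizontal centroidal axis, for uniform wall thickness t."""
    b_in, h_in = b_out - 2 * t, h_out - 2 * t
    return (b_out * h_out**3 - b_in * h_in**3) / 12

def bending_stress(M_nmm: float, I_mm4: float, c_mm: float) -> float:
    """Maximum bending stress (MPa) for moment M and extreme-fibre distance c."""
    return M_nmm * c_mm / I_mm4
```

The same comparison, repeated for the C, I, and tubular sections, is what drives the selection of the optimised model.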



Voice Controlled Car (MicroPython)



The Android application connects to the Bluetooth module (HC-05) on the car via Bluetooth. Commands are sent to the car using the push buttons or voice commands in the Android application. At the receiving end, two DC motors are interfaced to the microcontroller and used for the movement of the vehicle. The transmitter side can take either a switch press or a voice command, which is converted into encoded digital data, giving an adequate range (up to 100 meters) from the car. The receiver decodes the data before feeding it to the microcontroller, which drives the DC motors via a motor driver IC. This approach has an advantage in communication range compared to plain RF technology. The project could further be developed using IoT technology, so that a user could control the car from any corner of the world.

Speech recognition and voice recognition both work on recordings of human voices, but they do different things with them. Speech recognition strips out personal differences to detect the words; voice recognition typically disregards the language and meaning to identify the physical person behind the speech. For our project, recognizing the spoken command words is what makes the car user-friendly. The proposed system captures spoken words and commands using a microphone and converts them into a digitally stored set of words. Two factors decide the accuracy of the proposed recognition system: accuracy in detecting the human words, and processing those words fast enough that the commands are executed with the least delay.
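The receiver-side decoding can be sketched as a small command table. The single-letter codes below are illustrative assumptions (the actual protocol between the app and the HC-05 is not given); on real hardware each (left, right) pair would set motor-driver pins rather than just being returned.

```python
# Hedged sketch of the receiver's command handling: one-character codes from
# the Bluetooth link map to (left motor, right motor) directions, where
# 1 = forward, -1 = reverse, 0 = stop. Codes are illustrative, not the
# project's actual protocol.

COMMANDS = {
    "F": (1, 1),    # forward: both motors forward
    "B": (-1, -1),  # backward: both motors reverse
    "L": (-1, 1),   # left: left motor back, right motor forward
    "R": (1, -1),   # right: opposite of left
    "S": (0, 0),    # stop
}

def handle_command(code: str):
    """Decode one command character; unknown codes stop the car for safety."""
    return COMMANDS.get(code.upper(), (0, 0))
```

Defaulting unknown bytes to stop means a corrupted command never leaves the car moving.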



Emotional Analysis Based Content Recommendation System



As the saying goes, “We are what we see”; the content we watch can sometimes have an adverse effect on our behavior. Especially in a country like India, where films and TV series are highly prominent, there is a real chance of randomly coming across explicit or disturbing content. This can adversely affect people's behavior, especially children's. We also know that “prevention is better than cure”: preventing inappropriate content from going online can be more effective than banning it after release.

To achieve this, we aim to create a content filtering and recommendation system that either recommends a film or TV series or alerts the user with a warning message saying it is not recommended to watch. Netflix and other over-the-top (OTT) platforms perform a filtering process before they buy digital rights for any content, and this is where our tool comes in handy. It detects disturbing or strongly emotion-inducing content with the help of human emotions. Through this project we aim to create a content detector based on human emotion recognition: we will show scenes to a test audience and capture their live emotions.

Then we use “Facebook DeepFace”, a pretrained CNN-based face recognition and facial emotion analysis model, to identify faces and analyze their emotions. We use deep learning methods to recognize facial expressions and then apply the circumplex model proposed by James Russell to classify emotions based on arousal and valence values. Based on the majority emotion shown by the audience, we either recommend or do not recommend the content for going on air. This system prevents inappropriate content from going on air.
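The classification-and-vote step can be sketched as below. Mapping the signs of valence and arousal to circumplex quadrants and recommending only when the majority emotion is positive-valence are illustrative assumptions, not the project's exact labels or thresholds.

```python
# Sketch of the recommendation step: classify each viewer's (valence, arousal)
# reading into a circumplex quadrant, then take a majority vote. Quadrant
# names and the decision rule are assumptions for illustration.

from collections import Counter

def circumplex_quadrant(valence: float, arousal: float) -> str:
    """Classify an emotion by the signs of valence and arousal."""
    if valence >= 0:
        return "excited" if arousal >= 0 else "calm"
    return "distressed" if arousal >= 0 else "depressed"

def recommend(readings) -> bool:
    """Majority vote over (valence, arousal) readings from the test audience:
    recommend only when the dominant emotion has positive valence."""
    majority, _ = Counter(
        circumplex_quadrant(v, a) for v, a in readings
    ).most_common(1)[0]
    return majority in ("excited", "calm")
```

A real system would use DeepFace's per-frame emotion scores in place of the hand-written readings.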



Rock Paper Scissors



This is a handy game, generally played between 2 players, and certainly loved by every child on earth. Each player forms 1 of 3 shapes: rock, paper, or scissors.

Rock beats scissors, paper beats rock, and scissors beats paper.

There are 3 possible outcomes of a round: win, lose, or tie. The random module is used in this project to pick a value from the given options. Note that random is part of Python's standard library, so there is nothing to install; simply add "import random" at the top of the script.

There are 2 functions in this code: "choose_option_for_user" and "computer_option".

The first function allows the player to choose one among rock, paper, and scissors, and the second allows the computer to make its choice; the computer chooses randomly with the help of the random module. Last comes the while loop, where we determine whether the player or the computer wins the round, or whether it's a tie.

The main logic of the game is that the player makes their choice, then the computer makes its choice, the two choices are compared, and the winner is determined. If the player wants to play again, they can choose yes; if they don't, choosing no breaks the loop.
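The comparison logic described above can be sketched as follows. This is a minimal illustration rather than the submitted code, so the winner-deciding rule is pulled into its own function to keep it separate from user input and the replay loop.

```python
# Minimal sketch of the rock-paper-scissors logic: a lookup table records
# which shape each shape beats, and decide_winner compares the two choices.

import random

OPTIONS = ("rock", "paper", "scissors")
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def computer_option() -> str:
    """Computer picks one of the three shapes at random (standard library)."""
    return random.choice(OPTIONS)

def decide_winner(player: str, computer: str) -> str:
    """Return 'player', 'computer', or 'tie' for one round."""
    if player == computer:
        return "tie"
    return "player" if BEATS[player] == computer else "computer"
```

In the full game, a while loop would read the player's choice, call these two functions, print the result, and break when the player answers no to playing again.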


