Academic Project

Last Updated on May 3, 2021

About

This project was completed as part of my master's degree.

We developed a fitness application for Android. We used MySQL and Java in Android Studio, and Adobe XD to create prototypes of the application's pages.

In this application, the user sets a goal for their desired fitness level.

The application helps the user track their calorie consumption and also offers food suggestions to help them reach their daily calorie goal.
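
The application itself is written in Java with a MySQL backend, but the calorie logic just described is simple enough to sketch. The snippet below is an illustrative Python sketch only, with made-up food names and values, showing how remaining calories and food suggestions could be derived from a daily goal.

```python
# Illustrative sketch of the calorie-tracking and food-suggestion idea described above.
# The real application is written in Java (Android Studio) with a MySQL backend;
# this Python version only shows the logic, and all food names/values are made up.

DAILY_GOAL_KCAL = 2200          # hypothetical daily calorie goal set by the user

# hypothetical food list (name -> kcal per serving)
FOODS = {"banana": 105, "oatmeal bowl": 300, "chicken salad": 450, "yogurt": 150}

def remaining_calories(consumed_kcal, goal_kcal=DAILY_GOAL_KCAL):
    """Calories still needed (positive) or exceeded (negative) for the day."""
    return goal_kcal - consumed_kcal

def suggest_foods(consumed_kcal, foods=FOODS):
    """Suggest foods whose serving fits within the remaining calorie budget."""
    remaining = remaining_calories(consumed_kcal)
    return [name for name, kcal in foods.items() if kcal <= remaining]

if __name__ == "__main__":
    eaten = 1800                                 # calories logged so far today
    print("Remaining:", remaining_calories(eaten), "kcal")
    print("Suggestions:", suggest_foods(eaten))
```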

The link is provided below.


More Details: academic project

Submitted By



Python Project: Pillow, Tesseract, and OpenCV

Last Updated on May 3, 2021

About

The Project

Take a ZIP file of images and process them, using a library built into Python that you need to learn how to use. A ZIP file takes several different files and compresses them into one single file, thus saving space. The files in the ZIP file we provide are newspaper images (like you saw in week 3). Your task is to write Python code which allows one to search through the images looking for occurrences of keywords and faces. E.g. if you search for "pizza", it will return a contact sheet of all of the faces which were located on the newspaper page which mentions "pizza". This will test your ability to learn a new library, your ability to use OpenCV to detect faces, your ability to use Tesseract to do optical character recognition, and your ability to use PIL to composite images together into contact sheets.

Each page of the newspapers is saved as a single PNG image in a file called images.zip. These newspapers are in English and contain a variety of stories, advertisements and images. Note: this file is fairly large (~200 MB) and may take some time to work with, so I would encourage you to use small_img.zip for testing.
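
As a rough idea of how the pieces fit together, here is a minimal, unofficial Python sketch of such a pipeline using zipfile, pytesseract, OpenCV's bundled Haar face cascade, and PIL. The thumbnail size and grid layout are arbitrary choices, not part of the assignment.

```python
# Minimal sketch of the search pipeline described above (not the official solution):
# unzip newspaper pages, OCR each page with pytesseract, and if a keyword appears,
# detect faces with OpenCV and paste them onto a PIL contact sheet.
import zipfile

import cv2
import numpy as np
import pytesseract
from PIL import Image

FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
THUMB = 100  # thumbnail size (pixels) for each face in the contact sheet

def search_zip(zip_path, keyword):
    faces = []
    with zipfile.ZipFile(zip_path) as zf:
        for name in zf.namelist():
            if not name.lower().endswith(".png"):
                continue
            page = Image.open(zf.open(name)).convert("RGB")
            text = pytesseract.image_to_string(page)
            if keyword.lower() not in text.lower():
                continue
            gray = cv2.cvtColor(np.array(page), cv2.COLOR_RGB2GRAY)
            for (x, y, w, h) in FACE_CASCADE.detectMultiScale(gray, 1.3, 5):
                faces.append(page.crop((x, y, x + w, y + h)).resize((THUMB, THUMB)))
    if not faces:
        return None
    # Compose the face thumbnails into a 5-column contact sheet.
    cols = 5
    rows = (len(faces) + cols - 1) // cols
    sheet = Image.new("RGB", (cols * THUMB, rows * THUMB))
    for i, face in enumerate(faces):
        sheet.paste(face, ((i % cols) * THUMB, (i // cols) * THUMB))
    return sheet

# Example: search the small test archive mentioned above for the word "pizza".
# sheet = search_zip("small_img.zip", "pizza")
# if sheet: sheet.show()
```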

More Details: Python Project: pillow, tesseract, and opencv

Submitted By


NavAssist AI

Last Updated on May 3, 2021

About

Incorporating machine learning, and haptic feedback, NavAssistAI detects the position and state of a crosswalk light, which enables it to aid the visually impaired in daily navigation.

Inspiration

One day, we were perusing YouTube looking for an idea for our school's science fair. On that day, we came across a blind YouTuber named Tommy Edison. He had uploaded a video of himself attempting to cross a busy intersection on his own. It was apparent that he was having difficulty, and at one point he almost ran into a street sign. After seeing his video, we decided that we wanted to leverage new technology to help people like Tommy in daily navigation, so we created NavAssist AI.

What it does

In essence, NavAssist AI uses object detection to detect both the position and state of a crosswalk light (stop hand or walking person). It then processes this information and relays it to the user through haptic feedback in the form of vibration motors inside a headband. This allows the user to understand whether it is safe to cross the street or not, and in which direction they should face when crossing.

How we built it

We started out by gathering our own dataset of 200+ images of crosswalk lights, because no existing dataset of such images was available. We then ran through many iterations on many different models, training each model on this dataset. Through the different model architectures and iterations, we strove to find a balance between accuracy and speed. We eventually discovered that an SSDLite MobileNet model from the TensorFlow model zoo had the balance we required. Using transfer learning and many iterations, we trained a model that finally worked. We implemented it on a Raspberry Pi with a camera, soldered on a power button and vibration motors, and custom-designed a 3D-printed case with room for a battery. This became our prototype wearable device.
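
The team's actual code is not shown in this write-up; the sketch below is only a rough illustration of the kind of loop described, assuming a TensorFlow Lite export of the SSDLite MobileNet detector, camera frames read through OpenCV, and a vibration motor driven through RPi.GPIO. The model path, class ids, input size, and GPIO pin are all assumptions.

```python
# Rough sketch (not the team's actual code) of the inference loop described above:
# a TensorFlow Lite export of the SSDLite MobileNet detector runs on Raspberry Pi
# camera frames, and a vibration motor on a GPIO pin buzzes while "walk" is detected.
# Model path, class ids, input size, and the GPIO pin number are assumptions.
import cv2
import numpy as np
import RPi.GPIO as GPIO
from tflite_runtime.interpreter import Interpreter

MOTOR_PIN = 18          # assumed GPIO pin driving the vibration motor
WALK, STOP = 1, 2       # assumed class ids for "walking person" / "stop hand"

GPIO.setmode(GPIO.BCM)
GPIO.setup(MOTOR_PIN, GPIO.OUT)

interpreter = Interpreter(model_path="ssdlite_mobilenet_crosswalk.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()  # boxes, classes, scores, count (typical SSD order)

cap = cv2.VideoCapture(0)
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Convert to RGB and resize to the detector's assumed 300x300 uint8 input.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        tensor = cv2.resize(rgb, (300, 300))[np.newaxis, ...].astype(np.uint8)
        interpreter.set_tensor(inp["index"], tensor)
        interpreter.invoke()
        classes = interpreter.get_tensor(out[1]["index"])[0]
        scores = interpreter.get_tensor(out[2]["index"])[0]
        walk_seen = any(int(c) == WALK and s > 0.5 for c, s in zip(classes, scores))
        # Buzz the motor while the "walk" signal is detected, stay quiet otherwise.
        GPIO.output(MOTOR_PIN, GPIO.HIGH if walk_seen else GPIO.LOW)
finally:
    cap.release()
    GPIO.cleanup()
```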

Challenges we ran into

When we started this project, we knew nothing about machine learning or TensorFlow and had to start from scratch. However, with some googling and trying things out, we were able to figure out how to implement TensorFlow for our project with relative ease. Another challenge was collecting, preparing, and labelling our dataset of 200+ images. Our most important challenge, though, was not knowing what it's like to be visually impaired. To overcome this, we went out to people in the blind community and talked to them so that we could properly understand the problem and create a good solution.

Accomplishments that we're proud of

  • Making our first working model that could tell the difference between stop and go
  • Getting the haptic feedback implementation to work with the Raspberry Pi
  • When we first tested the device and successfully crossed the street
  • When we presented our work at TensorFlow World 2019

All of these milestones made us very proud because we are progressing towards something that could really help people in the world.

What we learned

Throughout the development of this project, we learned so much. Going into it, we had no idea what we were doing. Along the way, we learned about neural networks, machine learning, computer vision, as well as practical skills such as soldering and 3D CAD. Most of all, we learned that through perseverance and determination, you can make progress towards helping to solve problems in the world, even if you don't initially think you have the resources or knowledge.

What's next for NavAssistAI

We hope to expand its object detection capabilities. For example, we would like to add detection of obstacles so that it can aid in more than just crossing the street. We are also working to make the wearable device smaller and more portable, as our first prototype can be somewhat burdensome. In the future, we hope to eventually reach a point where it is marketable and we can start helping people everywhere.

More Details: NavAssist AI

Submitted By


Virtual Dental Clinic

Last Updated on May 3, 2021

About

Ongoing under the guidance of Dr. Sateesh Kumar Peddoju, Department of Computer Science & Engineering, from November 2020 to present. In this project we are creating a platform using Node.js where patients can consult with dentists about their symptoms in a virtual environment, made available as both a web-based application and a mobile application compatible with Android and iOS devices. The patient can easily connect to dentists for timely collaboration and consultation according to their time and space constraints. Patients can consult with a dentist of their choice via audio/video streaming and text-based messaging, and receive a diagnosis and prescription at a time and place convenient to them. Patients upload their current symptoms, and the dentists, on the other hand, analyze the patient's reports and prior records to write and upload prescriptions. The application also maintains patient records in a secure database for future reference.

We specified the functional and non-functional requirements and the design for such an application, with emphasis on the efficiency, reliability, and security of the services provided and the data stored. The developed application will provide patients with a quick, easy, and secure way of consulting with a dentist of their choice.
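
The platform itself is being built on Node.js; purely as an illustration of the records the description mentions (symptoms uploaded by the patient, reports reviewed by the dentist, prescriptions stored for future reference), here is a small, hypothetical Python data-model sketch. All field names are assumptions.

```python
# Illustrative sketch only: the platform is built on Node.js, but the records it
# exchanges (per the description above) can be summarised with simple data types.
# Every field name here is a hypothetical placeholder, not the project's schema.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class Consultation:
    patient_id: str
    dentist_id: str
    symptoms: str                                       # uploaded by the patient
    reports: List[str] = field(default_factory=list)    # prior records reviewed by the dentist
    prescription: Optional[str] = None                  # written by the dentist after review
    created_at: datetime = field(default_factory=datetime.utcnow)

@dataclass
class PatientHistory:
    patient_id: str
    consultations: List[Consultation] = field(default_factory=list)

    def add(self, consultation: Consultation) -> None:
        """Keeping past consultations supports the 'future reference' requirement."""
        self.consultations.append(consultation)
```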




More Details: Virtual Dental Clinic

Submitted By


Real Time Face Mask Detection In Python

Last Updated on May 3, 2021

About

About the Project

The COVID-19 pandemic has raised many concerns regarding health and our environment, and one of the key measures to stop the virus from spreading is wearing a mask in public places. Enforcing this efficiently is not something humans can manage single-handedly.

Here's why:

  • Even if a team of people were assigned to the task, it would be difficult to keep track of everyone not wearing a mask
  • Manual labor can be reduced, cutting the expenditure on hiring more people for a job that a machine can accomplish
  • The system can be used not only for mask detection but also, with a few tweaks, as an attendance manager in workplaces, schools, etc.
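
The post does not include the project's code, but a minimal real-time detection loop of the kind described could look like the following sketch, which assumes OpenCV's bundled Haar face detector and a pre-trained Keras mask/no-mask classifier saved as a hypothetical mask_detector.h5 file; the exact preprocessing depends on how that classifier was trained.

```python
# Minimal real-time sketch of the idea above (not the project's actual code):
# OpenCV's Haar cascade finds faces in webcam frames and a pre-trained Keras
# classifier labels each face "Mask" / "No Mask". The model file name is hypothetical
# and the classifier is assumed to output a single sigmoid "mask" probability.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
mask_model = load_model("mask_detector.h5")   # assumed binary mask/no-mask classifier

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        # Crop the face, resize, and scale pixels to [0, 1] before classification.
        face = cv2.resize(frame[y:y + h, x:x + w], (224, 224)) / 255.0
        prob_mask = float(mask_model.predict(face[np.newaxis, ...])[0][0])
        label = "Mask" if prob_mask > 0.5 else "No Mask"
        color = (0, 255, 0) if label == "Mask" else (0, 0, 255)
        cv2.rectangle(frame, (x, y), (x + w, y + h), color, 2)
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, color, 2)
    cv2.imshow("Mask detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```
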
More Details: Real time face mask detection in python

Submitted By


Govindasamy Bala - IISc Bangalore | A Significant Step in Understanding Climate Change

Last Updated on May 3, 2021

About

We are currently living in what is considered the Anthropocene, an age in which human activities have a significant impact on the planet. Perhaps the most serious damage inflicted by humans has been on its climate. Climate change, driven by an increase in the average surface temperature of Earth, results from a surge in radiative forcing—the difference between the energy received by Earth and the energy radiated back to space. The radiative forcing agents of the industrial era include greenhouse gases such as carbon dioxide (CO2) and methane, which trap the longwave radiation emitted by our planet. To measure how effective a forcing agent is in causing Earth’s climate to change, researchers use the concept of efficacy, defined as the ratio of the global temperature change due to that particular forcing agent to the temperature change caused by CO2 for the same radiative forcing value.

In a new study, Govindasamy Bala—one of India’s most well-known climate scientists—and his student, Angshuman Modak, addressed the issue of the efficacy of the incident solar radiation relative to CO2.* Using a modelling approach, they considered climate system responses during three different time periods: a week, four months and a century.

“What we found was that the Sun is less effective than CO2 in causing climate change,” says Bala. In fact, the study shows that solar forcing is only 80% as effective as CO2 forcing. “This means that for the same radiative forcing, if CO2 causes 1 °C warming, the Sun causes only 0.8 °C warming,” he continues. This finding, Bala argues, is crucial not just for our understanding of the mechanisms of climate change but also for the formulation of more effective climate change policies. 
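
Restating the definition and the quoted numbers in symbols (this is only the article's own arithmetic, not an equation taken from the study):

```latex
E_{\text{solar}}
  = \left.\frac{\Delta T_{\text{solar}}}{\Delta T_{\mathrm{CO_2}}}\right|_{\text{same radiative forcing}}
  = \frac{0.8\ ^{\circ}\mathrm{C}}{1.0\ ^{\circ}\mathrm{C}}
  = 0.8
```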

More Details: Govindasamy Bala - IISC Bangalore | A Significant Step in understanding climate change

Submitted By