Tic Tac Toe Game

Last Updated on May 3, 2021

About

This game is built using Python and features a simple, clean interface.

More Details: TIC TAC TOE GAME

Submitted By


Share with someone who needs it

AI Real-Time Car and Pedestrian Tracking App

Last Updated on May 3, 2021

About

An AI real-time car and pedestrian detection app built with Python and OpenCV.

Learning from this:

  1. Haar features and the Haar cascade algorithm
  2. How the Haar cascade algorithm works in real time on grayscale images
  3. Why it works better on grayscale images than on coloured frames
  4. How a few simple lines of code can do magic when the right things are put in the right places

The result from this:

  1. We can detect people and vehicles and identify them in real time, either from a live webcam feed or from an imported video.
  2. Multiple objects can be detected at once, even as their on-screen dimensions keep changing.
  3. This can help with accident avoidance, as Tesla has also suggested with their dashcam videos.


Challenges faced

  1. The most important challenge is training the data, which is time-consuming; for a simple prototype, using OpenCV's pre-trained data is beneficial as it saves a lot of time.
  2. Understanding how the Haar algorithm works is another major challenge, since it has to be quite accurate to detect objects in real time.
  3. Importing OpenCV required installing multiple packages, and different versions of Python need different versions of the library.
  4. Detecting people together with non-living vehicles is itself a challenge, since it requires two different cascade classifiers trained on different data.



More Details: AI real time car and pedestrian tracking app

Submitted By


JEEViKA Special Purpose Vehicle for Agricultural Transformation (JSPVAT)

Last Updated on May 3, 2021

About

JSPVAT, a program initiated in 2020 to support the Bihar state government’s rural livelihoods project, JEEViKA, aims to diversify and enhance household incomes and improve nutrition and sanitation for tens of thousands of farming households. JSPVAT has already facilitated an important systems-level partnership between JEEViKA and the World Bank to strengthen the agricultural and livestock market ecosystems in Bihar.

Reaching 50,000 farmers, JSPVAT’s work includes catalyzing key institutional changes to support market linkages and testing innovative private-sector models and approaches for inclusive agricultural transformation. The latter includes digital solutions related to procurement, quality testing, and access to finance and technologies, with farmer producer companies serving as a point for engagement with smallholder farmers. JSPVAT has also designed and supported the implementation of interventions to strengthen market ecosystems for fruits, vegetables, high-value crops, and livestock.

In the very first year, smallholder farmers supported by JEEViKA and JSPVAT engaged in expanded trade with institutional buyers of maize (more than 3,610 metric tons) and fruits and vegetables (210 metric tons) and procured a greater amount of agricultural inputs (more than 220 metric tons). JSPVAT introduced derivative trading on the NCDEX platform, enabling farmers to realize prices 6 to 8 percent higher than before. JSPVAT also facilitated access to public funding for participating farmer collectives—among the first such instances in the country.

In its first year, JSPVAT reached 7.5 million rural women from poor and marginalized-caste households through self-help groups formed under JEEViKA. This has expanded their access to financial services, value chains in the agriculture and nonfarm sectors, and nutrition and sanitation services.

More Details: JEEViKA Special Purpose Vehicle for Agricultural Transformation (JSPVAT)

Submitted By


Human Computer Interaction Using Iris, Head and Eye Detection

Last Updated on May 3, 2021

About

HCI stands for human-computer interaction, which means the interaction between humans and computers.

We need to improve it because only then will user interaction and usability improve. A rich design encourages users, while a poor design keeps them at bay.

We also need to design for different categories of people of different ages, ethnicities, and genders, and make interfaces accessible to older people.

It is also our moral responsibility to make these systems accessible to disabled people.

So this project tracks the head, eyes, and iris to detect eye movement using the Viola-Jones algorithm. However, this algorithm does not work with a mask on, since it relies on facial features to calculate distances.

It uses the Euclidean distance to measure how far a feature moves between the previous frame and the next frame, and plots the result as a graph.

It also uses the formula θ = tan⁻¹(b/a) to calculate the deviation.
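The two formulas above can be sketched in a few lines; the coordinates in the example are made-up illustrative values, not data from the project:

```python
import math

def frame_distance(prev_point, curr_point):
    """Euclidean distance an eye landmark moves between consecutive frames."""
    return math.dist(prev_point, curr_point)

def deviation_angle(a, b):
    """Deviation theta = arctan(b / a), returned in degrees."""
    return math.degrees(math.atan2(b, a))

# Example: the iris centre moves from (100, 120) to (103, 124) between frames.
d = frame_distance((100, 120), (103, 124))  # -> 5.0
# Horizontal component a = 3 px, vertical component b = 4 px:
theta = deviation_angle(3, 4)               # roughly 53 degrees
```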

Here we use an artificial neural network (ANN) because ANNs can work with incomplete data. Specifically, we use a constructive (generative) neural network, meaning it captures the user's individual images at the beginning to learn individual patterns and track the eye.

Here we actually build the neural network and train it to make these predictions.
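As a sketch of what building and training such a network involves, here is a minimal feed-forward classifier in NumPy. The feature encoding (iris offset from centre), layer sizes, and toy data are illustrative assumptions, not the project's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy features: (horizontal, vertical) offset of the iris from the eye centre.
# Toy labels: 0 = looking left, 1 = looking right.
X = rng.normal(0.0, 1.0, size=(200, 2))
y = (X[:, 0] > 0).astype(int)

# One hidden layer of 8 tanh units, softmax output, plain gradient descent.
W1 = rng.normal(0.0, 0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, size=(8, 2)); b2 = np.zeros(2)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, e / e.sum(axis=1, keepdims=True)  # hidden activations, softmax

lr = 0.5
for _ in range(500):
    h, p = forward(X)
    grad = p.copy()
    grad[np.arange(len(y)), y] -= 1.0           # d(cross-entropy)/d(logits)
    grad /= len(y)
    dW2, db2 = h.T @ grad, grad.sum(axis=0)
    dh = (grad @ W2.T) * (1.0 - h ** 2)         # backprop through tanh
    dW1, db1 = X.T @ dh, dh.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

accuracy = (forward(X)[1].argmax(axis=1) == y).mean()
```

In the project the predicted direction would then be translated into cursor movement; here the network simply learns to separate left from right gazes on synthetic data.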

Finally, we convert the predictions into mouse movements, clicks, and double-clicks on icons and the virtual keyboard.

As contributing, moral individuals, it is our duty to make devices compatible with all age groups and with differently abled persons.

More Details: Human Computer Interaction using iris, head and eye detection

Submitted By


NavAssist AI

Last Updated on May 3, 2021

About

Incorporating machine learning, and haptic feedback, NavAssistAI detects the position and state of a crosswalk light, which enables it to aid the visually impaired in daily navigation.

Inspiration

One day, we were perusing YouTube looking for an idea for our school's science fair. That day, we came across a blind YouTuber named Tommy Edison. He had uploaded a video of himself attempting to cross a busy intersection on his own. It was apparent that he was having difficulty, and at one point he almost ran into a street sign. After seeing his video, we decided that we wanted to leverage new technology to help people like Tommy in daily navigation, so we created NavAssist AI.

What it does

In essence, NavAssist AI uses object detection to detect both the position and state of a crosswalk light (stop hand or walking person). It then processes this information and relays it to the user through haptic feedback in the form of vibration motors inside a headband. This allows the user to understand whether it is safe to cross the street or not, and in which direction they should face when crossing.

How we built it

We started out by gathering our own dataset of 200+ images of crosswalk lights, because no existing library of such images was available. We then ran through many iterations on many different models, training each one on this dataset. Across the different model architectures and iterations, we strove to find a balance between accuracy and speed. We eventually discovered that an SSDLite MobileNet model from the TensorFlow model zoo had the balance we required. Using transfer learning and many iterations, we trained a model that finally worked. We implemented it on a Raspberry Pi with a camera, soldered on a power button and vibration motors, and custom-designed a 3D-printed case with room for a battery. This made our prototype a wearable device.
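The relay from detection to haptics can be sketched as plain mapping logic. The label names, thresholds, and function below are illustrative assumptions, not the project's actual code; on the device the chosen motors would be driven through the Raspberry Pi's GPIO pins.

```python
def haptic_cue(label, box_center_x, frame_width):
    """Map one crosswalk-light detection to a haptic cue.

    label        -- "walk" (walking person) or "stop" (stop hand)
    box_center_x -- horizontal centre of the detected light's bounding box
    frame_width  -- width of the camera frame in pixels

    Returns (safe_to_cross, motors), where motors names which vibration
    motor(s) in the headband to pulse so the user faces the light.
    """
    safe = (label == "walk")
    third = frame_width / 3
    if box_center_x < third:          # light is to the user's left
        motors = ("left",)
    elif box_center_x > 2 * third:    # light is to the user's right
        motors = ("right",)
    else:                             # light is roughly straight ahead
        motors = ("left", "right")
    return safe, motors
```

For example, a "walk" light detected near the left edge of the frame would pulse only the left motor, steering the user to turn toward it before crossing.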

Challenges we ran into

When we started this project, we knew nothing about machine learning or TensorFlow and had to start from scratch. However, with some googling and experimentation, we were able to figure out how to implement TensorFlow for our project with relative ease. Another challenge was collecting, preparing, and labelling our dataset of 200+ images. Our most important challenge, though, was not knowing what it's like to be visually impaired. To overcome this, we went out to people in the blind community and talked with them so that we could properly understand the problem and create a good solution.

Accomplishments that we're proud of

  • Making our first working model that could tell the difference between stop and go
  • Getting the haptic feedback implementation to work with the Raspberry Pi
  • When we first tested the device and successfully crossed the street
  • When we presented our work at TensorFlow World 2019

All of these milestones made us very proud because we are progressing towards something that could really help people in the world.

What we learned

Throughout the development of this project, we learned so much. Going into it, we had no idea what we were doing. Along the way, we learned about neural networks, machine learning, computer vision, as well as practical skills such as soldering and 3D CAD. Most of all, we learned that through perseverance and determination, you can make progress towards helping to solve problems in the world, even if you don't initially think you have the resources or knowledge.

What's next for NavAssistAI

We hope to expand its ability for detecting objects. For example, we would like to add detection for things such as obstacles so that it may aid in more than just crossing the street. We are also working to make the wearable device smaller and more portable, as our first prototype can be somewhat burdensome. In the future, we hope to eventually reach a point where it will be marketable, and we can start helping people everywhere.

More Details: NavAssist AI

Submitted By


A Review On Weather Forecasting Techniques Using Machine Learning

Last Updated on May 3, 2021

About

Weather depicts the atmospheric conditions of a particular place at a particular time. The basic weather elements comprise temperature, wind, pressure, cloudiness, and humidity. Every day, the Meteorological Department prepares weather maps for the upcoming day with the help of data obtained from various weather stations around the world. Weather forecasts help in taking measures in advance when bad weather is likely and in planning the day ahead.

Different instruments are used to measure the various weather elements. For example, a thermometer is used to measure temperature, whereas a barometer is used to measure pressure. Similarly, a wind vane is used to find the direction of the wind, and a rain gauge is used to measure the amount of rainfall. Thus, with the help of the data collected through these instruments, we get the weather forecast in the form of weather charts.

To reduce this manual labour, these weather forecasting techniques are now being replaced with machine learning models that can predict future weather quite accurately from previously collected data. In this report, we discuss some of the forecasting techniques most likely to produce accurate weather predictions, and we compare the results of the various models to find the one that performs best.
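As an illustration of one of the compared techniques, here is a minimal sketch of Holt's linear trend method. The smoothing parameters and the toy temperature series are illustrative assumptions; libraries such as statsmodels provide production implementations.

```python
def holt_linear_forecast(series, horizon, alpha=0.8, beta=0.2):
    """Forecast `horizon` steps ahead with Holt's linear trend method.

    level_t = alpha * y_t + (1 - alpha) * (level_{t-1} + trend_{t-1})
    trend_t = beta * (level_t - level_{t-1}) + (1 - beta) * trend_{t-1}
    forecast(h) = level_n + h * trend_n
    """
    # Initialise level from the second point and trend from the first step.
    level, trend = series[1], series[1] - series[0]
    for y in series[2:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [level + h * trend for h in range(1, horizon + 1)]

# Toy example: daily maximum temperatures rising by 0.5 degrees per day.
temps = [20.0 + 0.5 * t for t in range(10)]
forecast = holt_linear_forecast(temps, 3)  # continues the linear trend
```

On perfectly linear data like this toy series, the method recovers the trend exactly; on real, noisy weather data, alpha and beta control how quickly the level and trend adapt to recent observations.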

Keywords: Weather Forecasting, ARIMA, Holt Linear, Holt-Winters, Stationarity, Dickey-Fuller


More Details: A REVIEW ON WEATHER FORECASTING TECHNIQUES USING MACHINE LEARNING

Submitted By