Object Detection With Amazon SageMaker

Last Updated on May 3, 2021


Train and deploy an object detector using Amazon SageMaker. SageMaker provides a number of machine learning algorithms ready to be used for solving a variety of tasks. We will use SageMaker's SSD object detection algorithm to create, train, and deploy a model that can localize the faces of dogs and cats from the popular IIIT-Oxford Pets Dataset.
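The deployed SSD endpoint returns, for each image, a list of detections of the form `[class_id, confidence, xmin, ymin, xmax, ymax]`, with coordinates normalized to [0, 1]. A minimal sketch of turning that response into pixel-space boxes (the class-name order and the 0.5 confidence threshold are our assumptions, not values from the project):

```python
# Convert normalized SSD detections to pixel-space boxes.
# The [class_id, score, xmin, ymin, xmax, ymax] layout matches the
# SageMaker built-in object-detection output; the class names and
# the 0.5 threshold below are illustrative assumptions.
CLASS_NAMES = ["dog", "cat"]  # hypothetical label order

def to_pixel_boxes(detections, width, height, threshold=0.5):
    boxes = []
    for class_id, score, xmin, ymin, xmax, ymax in detections:
        if score < threshold:
            continue  # drop low-confidence detections
        boxes.append({
            "label": CLASS_NAMES[int(class_id)],
            "score": score,
            "box": (int(xmin * width), int(ymin * height),
                    int(xmax * width), int(ymax * height)),
        })
    return boxes

# Example endpoint response for a 400x300 image:
response = {"prediction": [[0, 0.92, 0.10, 0.20, 0.55, 0.80],
                           [1, 0.30, 0.60, 0.10, 0.90, 0.50]]}
print(to_pixel_boxes(response["prediction"], 400, 300))
```

The second detection is filtered out by the threshold, so only the high-confidence dog box survives.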

More Details: Object Detection with Amazon SageMaker


Smart Health Monitoring App



The proposed solution is an online, mobile-based application containing information on the pre- and post-natal periods. The app helps a pregnant woman track pregnancy milestones and know when to worry and when not to. To use the app, the user registers by entering her name, age, mobile number, and preferred language. The app is user-friendly: it is multilingual and offers audio-video guides to help people with impaired hearing or sight, keeping in mind women who live in rural areas or have been deprived of primary education. The app encompasses two sections, pre-natal and post-natal.

In case of an emergency, i.e. when the water breaks, there is a provision to send an emergency message (notification) to FCM (Firebase Cloud Messaging). The app first tries to access the phone's GPS settings; if GPS is off, the Geolocation API is used instead. Using the Wi-Fi nodes the device can detect, the Internet, Google's datasets, and nearby cell towers, a precise location is generated and sent via geocoding to FCM, which in turn generates push notifications. The tokens are sent to registered users, hospitals, nearby doctors, etc., so that the necessary actions can be taken and timely help provided.
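The emergency message that reaches FCM can be sketched as a plain payload; actual delivery would go through the Firebase Admin SDK, and the field names, topic, and coordinates below are illustrative assumptions rather than the app's real schema:

```python
# Build the emergency push-notification payload for FCM.
# The notification/data split loosely follows FCM's message shape;
# the coordinates and token handling here are illustrative assumptions.
def build_emergency_message(user_name, latitude, longitude, device_tokens):
    return {
        "notification": {
            "title": "Emergency alert",
            "body": f"{user_name} needs immediate assistance.",
        },
        "data": {  # payload consumed by the hospital/doctor apps
            "latitude": str(latitude),
            "longitude": str(longitude),
            "maps_link": f"https://maps.google.com/?q={latitude},{longitude}",
        },
        "tokens": device_tokens,  # registered users, hospitals, nearby doctors
    }

msg = build_emergency_message("Asha", 19.076, 72.8777, ["token-hospital-1"])
print(msg["data"]["maps_link"])
```

In the real app the precise location would come from GPS or the Geolocation API as described above, and FCM would fan the notification out to every registered token.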

More Details: Smart Health Monitoring App



E-Commerce


- Implemented an e-commerce web app, built between 13 November and 12 December 2020.

- In this web app, users can browse the various products available in the database and place orders virtually.

- Built with Python, Django, Bootstrap, and JavaScript, with a particular focus on the backend to deepen backend skills and knowledge.

- The web app has full database functionality, which supports its various features and operations.

- Users can raise queries about products and processes, and a dedicated search feature lets them filter for the products they need.

- A drop-down cart shows the products the user has selected, with two buttons at the bottom: Checkout and Clear Cart.

- Clicking the Checkout button takes the user to the place-order page, where they fill in their details and finally place the order.

- All order details and queries are stored in the database along with the username and date.

- Clicking the Clear Cart button removes every product the user had selected for purchase.
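The checkout and clear-cart behaviour can be sketched independently of Django as a small in-memory class; in the real app the items live in the session and the order is saved to the database, so the names and fields here are illustrative:

```python
# Minimal in-memory cart mirroring the app's Checkout / Clear Cart flow.
# The real Django app persists orders to the database with the username
# and date; the field names here are illustrative assumptions.
class Cart:
    def __init__(self):
        self.items = []  # list of (product_name, price)

    def add(self, name, price):
        self.items.append((name, price))

    def checkout(self, username):
        """Checkout button: build an order record and empty the cart."""
        order = {
            "username": username,
            "products": [name for name, _ in self.items],
            "total": sum(price for _, price in self.items),
        }
        self.items = []  # cart is emptied once the order is placed
        return order

    def clear(self):
        """Clear Cart button: drop every selected product."""
        self.items = []

cart = Cart()
cart.add("Headphones", 1500)
cart.add("Mouse", 500)
print(cart.checkout("alice"))
```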

More Details: E-Commerce


Human Computer Interaction Using Iris, Head and Eye Detection



HCI focuses on the interfaces between people and computers and on how to design, evaluate, and implement interactive computer systems that satisfy the user. The human–computer interface can be described as the point of communication between the human user and the computer. The flow of information between the human and the computer is defined as the loop of interaction. HCI deals with the design, execution, and assessment of computer systems and related phenomena intended for human use. The HCI process is completed by applying a digital signal processing system that takes analog input from the user through dedicated hardware (a web camera) together with software.

Eye tracking is the process of measuring either the point of gaze (where one is looking) or the motion of the eye relative to the head. An eye tracker is a device for measuring eye positions and eye movements. Eye trackers are used in research on the visual system, in psychology, in marketing, as an input device for human–computer interaction, and in product design. There are a number of methods for measuring eye movement; the most popular variant uses video images from which the eye positions are extracted. Early eye-movement measurements were made using direct observation. It was observed that reading does not involve a smooth sweeping of the eyes along the text, as previously assumed, but a series of short stops (called fixations). The records show conclusively that the character of the eye movement is either completely independent of, or only very slightly dependent on, the material of the picture and how it was made. The cyclical pattern in the examination of a picture depends not only on what is shown in the picture, but also on the problem facing the observer and the information one hopes to get from the picture. Eye movement reflects the human thought process, so the observer's thoughts may be followed to some extent from records of eye movement. It is easy to determine from these records which elements attract the observer's eye, in what order, and how often.

We build a neural network here; there are two types of network: the feed-forward network and the feedback network.

Using video-oculography, horizontal and vertical eye movements tend to be easy to characterize, because they can be directly deduced from the position of the pupil. Torsional movements, which are rotational movements about the line of sight, are rather more difficult to measure; they cannot be directly deduced from the pupil, since the pupil is normally almost round and thus rotationally invariant. One effective way to measure torsion is to add artificial markers (physical markers, corneal tattoos, scleral markings, etc.) to the eye and then track these markers. However, the invasive nature of this approach tends to rule it out for many applications. Non-invasive methods instead attempt to measure the rotation of the iris by tracking the movement of visible iris structures.


To measure a torsional movement of the iris, the image of the iris is typically transformed into polar co-ordinates about the center of the pupil; in this co-ordinate system, a rotation of the iris is visible as a simple translation of the polar image along the angle axis. Then, this translation is measured in one of three ways: visually, by using cross-correlation or template matching, or by tracking the movement of iris features. Methods based on visual inspection provide reliable estimates of the amount of torsion, but they are labour intensive and slow, especially when high accuracy is required. It can also be difficult to do visual matching when one of the pictures has an image of an eye in an eccentric gaze position.
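Since torsion appears as a translation of the polar image along the angle axis, the cross-correlation route reduces, in its simplest form, to finding the circular shift that best aligns two angular intensity profiles. A toy one-dimensional sketch (the profile values are invented; a real implementation would correlate whole rows of the polar-unwrapped iris image):

```python
# Recover iris torsion as a circular shift between two angular
# intensity profiles (rows of the polar-unwrapped iris image).
# Toy 1-D version of the cross-correlation approach; the profile
# values below are invented for illustration.
def circular_shift_estimate(reference, rotated):
    n = len(reference)
    best_shift, best_score = 0, float("-inf")
    for shift in range(n):
        # correlation score for this candidate rotation (in bins)
        score = sum(reference[(i + shift) % n] * rotated[i]
                    for i in range(n))
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift  # angular bins; degrees = shift * 360 / n

profile = [0, 1, 4, 9, 4, 1, 0, 0, 0, 0, 0, 0]
rotated = profile[3:] + profile[:3]   # simulate a torsion of 3 bins
print(circular_shift_estimate(profile, rotated))  # → 3
```

The difficulties named below (imperfect pupil tracking, eccentric gaze, pupil-size changes, non-uniform lighting) all distort the profiles before this correlation step, which is why the plain method can lose tracking.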

If instead one uses a method based on cross-correlation or template matching, then the method will have difficulty coping with imperfect pupil tracking, eccentric gaze positions, changes in pupil size, and non-uniform lighting. There have been some attempts to deal with these difficulties but even after the corrections have been applied, there is no guarantee that accurate tracking can be maintained. Indeed, each of the corrections can bias the results.

The remaining approach, tracking features in the iris image, can also be problematic. Features can be marked manually, but this process is time intensive, operator dependent, and can be difficult when the image contrast is low. Alternatively, one can use small local features like edges and corners. However, such features can disappear or shift when the lighting and shadowing on the iris changes, for example, during an eye movement or a change in ambient lighting. This means that it is necessary to compensate for the lighting in the image before calculating the amount of movement of each local feature.

In our application of the Maximally Stable Volumes detector, we choose the third dimension to be time, not space, which means that we can identify two-dimensional features that persist in time. The resulting features are maximally stable in space (2-D) and time (1-D), which means that they are 3-D intensity troughs with steep edges. However, the method of Maximally Stable Volumes is rather memory intensive, meaning that it can only be used for a small number of frames (in our case, 130 frames) at a time. Thus, we divide up the original movie into shorter overlapping movie segments for the purpose of finding features. We use an overlap of four frames, since the features become unreliable at the ends of each sub-movie. We set the parameters of the Maximally Stable Volumes detector such that we find almost all possible features. Of these features, we only use those that are near to the detected pupil center (up to 6 mm away) and small (smaller than roughly 1% of the iris region). We remove features that are large in angular extent (the pupil and the edges of the eyelids), as well as features that are further from the pupil than the edges of the eyelids (eyelashes).

We track the eye movement and convert it into mouse movement using the Euclidean distance, which would greatly help disabled people. We have also implemented a virtual keyboard.
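The Euclidean-distance mapping from pupil position to mouse movement can be sketched as follows; the frame centre, dead-zone radius, and gain are illustrative values we chose, and the real system derives the pupil position from the webcam frames:

```python
import math

# Map the tracked pupil position to a mouse movement using the
# Euclidean distance from the frame centre. The centre, dead-zone
# radius, and gain are illustrative assumptions.
CENTER = (320, 240)     # frame centre for a 640x480 webcam
DEAD_ZONE = 10          # ignore jitter this close to the centre
GAIN = 0.5              # cursor pixels per pixel of pupil offset

def mouse_delta(pupil_x, pupil_y):
    dx, dy = pupil_x - CENTER[0], pupil_y - CENTER[1]
    if math.dist((pupil_x, pupil_y), CENTER) < DEAD_ZONE:
        return (0, 0)   # eye is looking straight ahead
    return (dx * GAIN, dy * GAIN)

print(mouse_delta(322, 241))  # inside the dead zone -> (0, 0)
print(mouse_delta(400, 240))  # looking right -> cursor moves right
```

The dead zone keeps small tracking jitter from moving the cursor while the user fixates.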

More Details: Human Computer Interaction Using Iris, Head and Eye Detection



Scholastic


Problem Statement

Develop tools that would increase Productivity for students and teachers. In the past 10-15 years we have seen the transition of things around us from offline to online, whether it's business, entertainment activities, daily needs, and now even education. Productivity tools have been a success with businesses and firms. Develop productivity tools for students and teachers in any domain of your choice that can achieve the same success in the educational field in the future.

Problem Solution

In this post-COVID era, the education sector has opened up a plethora of new opportunities. Scholastic provides a complete and comprehensive education portal for students as well as staff.

  • The USP of the application is lab sessions simulated using Augmented Reality.
  • Other features include the use of virtual assistants like Alexa to provide reminders, complete timetables, and file integration.
  • A blockchain-based digital report card system where teachers can upload report cards for students and send them to parents.
  • A plagiarism checker for assignments.
  • It is a one-stop solution for all needs, such as announcements and circulars from the institution or a staff member, fee payment, and even a chatbot for additional support.

Tech Stack

  • Google Assistant For Chatbot
  • Via the Actions Console
  • Python3 for Plagiarism Checker
  • Gensim
  • NumPy
  • NLP Models ( Word Embedding)
  • Heroku (For Deployment & making API Calls)
  • Android Studio with Java For Main Android App
  • AR Foundation For Simulated Lab Sessions with Blender & Unity
  • Ethereum, Solidity & React.js For Blockchain Based Storage for Report Cards (Along with Ganache & Truffle Suite)
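The plagiarism checker above builds on Gensim word embeddings; the underlying comparison can be sketched without the library as cosine similarity over word-count vectors (the bag-of-words simplification and any threshold are our assumptions, not the project's actual model):

```python
from collections import Counter
import math

# Bag-of-words cosine similarity between two assignments.
# The real checker uses Gensim word embeddings; this sketch uses raw
# word counts to show the similarity computation itself.
def cosine_similarity(text_a, text_b):
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

s = cosine_similarity("the cell is the basic unit of life",
                      "the cell is the smallest unit of life")
print(round(s, 2))  # high similarity -> likely plagiarism flag
```

Embeddings improve on this by also matching paraphrases (different words with similar meaning), which raw counts cannot do.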

More Details: Scholastic


Regression Analysis On Walmart Sales Data



One of the leading retail stores in the US, Walmart, would like to predict sales and demand accurately. There are certain events and holidays that impact sales on each day. Sales data are available for 45 Walmart stores. The business faces a challenge from unforeseen demand and sometimes runs out of stock because of an inappropriate machine learning algorithm. An ideal ML algorithm will predict demand accurately while ingesting factors like economic conditions, including the CPI, the unemployment index, etc.

Walmart runs several promotional markdown events throughout the year. These markdowns precede prominent holidays, the four largest of which are the Super Bowl, Labour Day, Thanksgiving, and Christmas. The weeks including these holidays are weighted five times higher in the evaluation than non-holiday weeks. Part of the challenge presented by this competition is modeling the effects of markdowns on these holiday weeks in the absence of complete/ideal historical data. Historical sales data for 45 Walmart stores located in different regions are available.

Dataset Description

This is the historical data which covers sales from 2010-02-05 to 2012-11-01, in the file Walmart_Store_sales. Within this file you will find the following fields:

- Store - the store number

- Date - the week of sales

- Weekly_Sales - sales for the given store

- Holiday_Flag - whether the week is a special holiday week (1 = holiday week, 0 = non-holiday week)

- Temperature - temperature on the day of sale

- Fuel_Price - cost of fuel in the region

- CPI - prevailing consumer price index

- Unemployment - prevailing unemployment rate

Holiday Events

Super Bowl: 12-Feb-10, 11-Feb-11, 10-Feb-12, 8-Feb-13

Labour Day: 10-Sep-10, 9-Sep-11, 7-Sep-12, 6-Sep-13

Thanksgiving: 26-Nov-10, 25-Nov-11, 23-Nov-12, 29-Nov-13

Christmas: 31-Dec-10, 30-Dec-11, 28-Dec-12, 27-Dec-13

Analysis Tasks

Basic Statistics Tasks

1. Which store has the maximum sales?

2. Which store has the maximum standard deviation, i.e., whose sales vary the most? Also find its coefficient of variation (standard deviation divided by the mean).

3. Which store(s) had a good quarterly growth rate in Q3 2012?

4. Some holidays have a negative impact on sales. Find the holidays that have higher sales than the mean sales in the non-holiday season for all stores together.

5. Provide a monthly and semester view of sales in units and give insights.
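Tasks 1 and 2 can be sketched on a handful of rows without pandas; the real analysis would load Walmart_Store_sales, and the sample numbers below are invented:

```python
from collections import defaultdict
from statistics import mean, pstdev

# Tasks 1 & 2 on toy weekly-sales rows: (store, weekly_sales).
# The real analysis reads Walmart_Store_sales; these figures are invented.
rows = [(1, 1500000), (1, 1600000), (1, 1550000),
        (2, 900000),  (2, 1800000), (2, 600000)]

sales = defaultdict(list)
for store, weekly in rows:
    sales[store].append(weekly)

totals = {s: sum(v) for s, v in sales.items()}
best_store = max(totals, key=totals.get)      # store with maximum sales

stds = {s: pstdev(v) for s, v in sales.items()}
most_variable = max(stds, key=stds.get)       # maximum standard deviation

# coefficient of variation = std / mean for the most variable store
cv = stds[most_variable] / mean(sales[most_variable])
print(best_store, most_variable, round(cv, 2))
```

On the toy data, store 1 sells the most in total while store 2's sales vary far more week to week, which is exactly the distinction tasks 1 and 2 are after.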

Statistical Model

For Store 1, build prediction models to forecast demand:

- Linear Regression - utilize variables like date, restructuring dates as 1 for 5 Feb 2010 (numbering from the earliest date in order). Hypothesize whether CPI, unemployment, and the fuel price have any impact on sales.

- Change dates into days by creating a new variable.

Select the model that gives the best accuracy.
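With dates restructured as 1, 2, 3, … from the earliest week, the baseline model reduces to simple linear regression of weekly sales on the week index. A sketch of the closed-form least-squares fit (the sales values are invented; the real model would also test CPI, unemployment, and the fuel price as regressors):

```python
# Simple least-squares fit of weekly sales against the restructured
# date index (1 = 5 Feb 2010, 2 = the next week, ...). The sales
# figures are invented for illustration.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    return slope, intercept

weeks = [1, 2, 3, 4, 5]
sales = [1.50, 1.52, 1.55, 1.53, 1.58]   # weekly sales, in millions

slope, intercept = fit_line(weeks, sales)
print(f"trend: {slope:+.3f}M per week, "
      f"forecast for week 6: {slope * 6 + intercept:.2f}M")
```

Comparing this against models that add CPI, unemployment, and fuel price (e.g. multiple regression) and keeping the one with the best accuracy matches the task as stated.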

More Details: Regression Analysis on Walmart Sales Data