Customer Management System

Last Updated on May 3, 2021

About

Introduction: The dataset consists of complaints registered by a bank's customers across different processes. The main task here is to automate complaint management: when a customer registers a complaint, the system automatically provides them with the generated ticket ID and the department that will resolve it. Goal: Manage customer complaints.

Technologies: The entire codebase was developed in the Python programming language and is hosted on Heroku. The website was developed using Flask, and the data is stored in a SQL database.

Model used: Naive Bayes was used to study the customers' complaints and classify them into different departments.
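As a rough illustration of the classification step, here is a minimal sketch using scikit-learn's multinomial Naive Bayes with TF-IDF features; the file name and the 'complaint_text'/'department' column names are assumptions, not the project's actual code.

# Minimal sketch of routing complaints to departments with Naive Bayes.
# Assumes a CSV with hypothetical columns 'complaint_text' and 'department'.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

df = pd.read_csv("complaints.csv")  # hypothetical file name
model = make_pipeline(TfidfVectorizer(stop_words="english"), MultinomialNB())
model.fit(df["complaint_text"], df["department"])

# Classify a new complaint and hand back the predicted department.
ticket_text = "My debit card was charged twice for the same transaction"
print(model.predict([ticket_text])[0])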



https://customer-mgment-system.herokuapp.com/


More Details: Customer Management System




Smart Fridge

Last Updated on May 3, 2021

About

A Smart Fridge that uses Computer Vision to log food, keeps users updated by SMS, and provides recommendations.

Inspiration

We saw the brand new Samsung Family Hub smart fridge at CES 2017, which requires manual data entry for the goods stored inside. We were inspired to create a smart fridge that can automatically log what's inside, let users access the data remotely, and recommend information to users based on what they have in the fridge.

What it does

This is an IoT-based smart fridge that uses Computer Vision to automatically log food, informs users through text messages of what's stored inside and its expiration dates, and recommends healthier and better use of the users' current storage through features like checking nutrition and searching for recipes related to stored items.

How we built it

We used a button on an Arduino board to emulate the action of “closing the fridge door”. The signal created by the button is sent to a PC through a serial COM port. When the PC receives that signal, the Kinect camera is triggered to capture a photo of the current contents of the fridge. The photo is then compressed and sent to our web server.

Our web server is written in Python + Flask and deployed on the Google App Engine Flexible Environment; it also contains the logic for responding to Twilio messages, which is covered below. When the web server receives the photo, it puts it in Google Cloud Storage and keeps some basic image metadata in a Google Cloud Datastore database. The Google Cloud Vision API is then called to analyze the photo and label what the item is and which category it belongs to. The labels coming out of the Cloud Vision API are passed to the Google Knowledge Graph API to be further narrowed down to things people would normally put in a fridge. The results coming out of Knowledge Graph are then stored in the Cloud Datastore database.

The fridge now identifies the items that are put in it by automatically capturing and analyzing photos. Every time new items are added to the fridge, Twilio sends an SMS notification to inform the user. Users are also able to text Twilio some basic commands (a rough sketch of this webhook follows the list below) to:

  • Check what is currently in the fridge
  • Check which item is about to pass its expiration date
  • Check the nutrition of the food stored
  • Search for recipes related to some items
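To make the Twilio side concrete, here is a minimal sketch of such a command webhook, assuming Flask and the twilio helper library; the "list" command and the get_fridge_items() helper are hypothetical stand-ins, not the project's actual code.

# Minimal sketch of a Flask endpoint answering Twilio SMS commands.
# get_fridge_items() is a hypothetical helper standing in for a read
# from the Google Cloud Datastore records described above.
from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)

def get_fridge_items():
    return ["milk", "eggs", "broccoli"]  # placeholder data

@app.route("/sms", methods=["POST"])
def sms_reply():
    command = request.form.get("Body", "").strip().lower()
    resp = MessagingResponse()
    if command == "list":
        resp.message("In your fridge: " + ", ".join(get_fridge_items()))
    else:
        resp.message("Commands: list, expiry, nutrition, recipes")
    return str(resp)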

Challenges we ran into

1) Capturing the Kinect photo with the least noise, and incorporating the Arduino-based trigger for the photo

2) Integrating the local image capture, Python web server, Google Cloud Platform, and Twilio, and making them work flawlessly. Specifically, the challenges included the following:

  • Image format conversion
  • Image compression and processing
  • Handling HTTP POST/GET requests between the local machine and the web server for images, as well as between the web server and Twilio for sending and receiving texts
  • Creating an appropriate database structure to store images and item labels

3) At first, it was really hard to pick the right label from the roughly 10 labels returned by the Cloud Vision API. We used Knowledge Graph first to narrow the list down to 3-5 labels, and then manually processed them according to how “general” or “specific” they are (a toy sketch of this heuristic appears after this list).

4) There were some misleading parts in the Python documentation of the Cloud Vision API. The URI format stated in the docs is not the format required by the actual function. We finally figured it out by looking into the C# version of that documentation.
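To illustrate the label-picking heuristic from challenge 3, here is a toy sketch; the category whitelist and the "prefer specific labels" rule are invented for illustration and are not the team's actual logic.

# Toy sketch of narrowing Knowledge Graph results to one fridge item.
# The whitelist and scoring rule below are illustrative stand-ins for
# the manual "general vs. specific" processing described above.
FRIDGE_CATEGORIES = {"food", "fruit", "vegetable", "dairy", "drink"}

def pick_label(labels):
    """labels: list of (description, confidence, category) tuples."""
    candidates = [l for l in labels if l[2] in FRIDGE_CATEGORIES]
    if not candidates:
        return None
    # Prefer confident, then more specific (longer) descriptions.
    return max(candidates, key=lambda l: (l[1], len(l[0])))[0]

print(pick_label([("produce", 0.9, "food"), ("granny smith apple", 0.8, "fruit")]))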

Accomplishments that we're proud of

We finished it early enough to write this :p

What we learned

We learned so much, both technical and non-technical, along the way of development.

What's next for Smart Fridge

Computer Vision System

  • Better recognition of photos containing multiple items of different categories
  • More accurate and systematic labeling of new items

Data log-in/Request methods

  • Use speech recognition to log data, complementing Computer Vision
  • A smarter Twilio assistant capable of natural language processing

Data Utilization Features

  • Automatically refill necessities through Google Express

More Details: Smart Fridge



Disease Prediction System

Last Updated on May 3, 2021

About

ML-Model-Flask-Deployment

This is a demo project illustrating how machine learning models are deployed to production using a Flask API.


Prerequisites

You must have Scikit-learn and Pandas (for the machine learning model) and Flask (for the API) installed.


Project Structure

This project has four major parts:

  1. model.py - This contains the code for our Machine Learning model, which predicts employee salaries based on the training data in the 'hiring.csv' file.
  2. app.py - This contains the Flask APIs that receive employee details through the GUI or API calls, compute the predicted value based on our model, and return it.
  3. request.py - This uses the requests module to call the APIs defined in app.py and displays the returned value.
  4. templates - This folder contains the HTML template that allows the user to enter employee details and displays the predicted employee salary.
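For orientation, here is a minimal sketch of what model.py and app.py might contain; this is an assumption-laden illustration, not the project's actual code, and the 'hiring.csv' column names (experience, test_score, interview_score, salary) are guesses.

# --- model.py (sketch): train and serialize the model ---
# The feature column names below are assumptions about 'hiring.csv'.
import pickle
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("hiring.csv")
X = df[["experience", "test_score", "interview_score"]]  # assumed columns
y = df["salary"]                                         # assumed column
model = LinearRegression().fit(X, y)
pickle.dump(model, open("model.pkl", "wb"))

# --- app.py (sketch): serve predictions from model.pkl ---
from flask import Flask, request, render_template

app = Flask(__name__)
model = pickle.load(open("model.pkl", "rb"))

@app.route("/predict", methods=["POST"])
def predict():
    # Read the three numeric form fields and predict a salary.
    features = [[float(v) for v in request.form.values()]]
    prediction = model.predict(features)[0]
    return render_template("index.html",
                           prediction_text=f"Predicted salary: {prediction:.2f}")

if __name__ == "__main__":
    app.run(port=5000)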


Running the project

  1. Ensure that you are in the project home directory. Create the machine learning model by running the command below:
python model.py

This creates a serialized version of our model in a file called model.pkl.

  2. Run app.py using the command below to start the Flask API:
python app.py

By default, Flask runs on port 5000.

  3. Navigate to the URL http://localhost:5000

You should be able to view the homepage.

Enter valid numerical values in all 3 input boxes and hit Predict.

If everything goes well, you should be able to see the predicted salary value on the HTML page!

  4. You can also send direct POST requests to the Flask API using Python's requests module. Run the command below to send a request with some pre-populated values:
python request.py
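A minimal version of such a request script might look like the following sketch; the endpoint and field names are assumptions matching the app.py sketch above.

# Sketch of request.py: POST some pre-populated values to the running API.
# The field names below are assumptions, not the project's actual schema.
import requests

url = "http://localhost:5000/predict"
payload = {"experience": 2, "test_score": 9, "interview_score": 6}
response = requests.post(url, data=payload)
print(response.text)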


More Details: Disease Prediction System



Social Distance Monitoring System (Python, Deep Learning and OpenCV) (Research Paper)

Last Updated on May 3, 2021

About

Social distancing is one of the community mitigation measures that may be recommended during a pandemic such as Covid-19. Social distancing can reduce virus transmission by increasing physical distance or reducing the frequency of congregation in socially dense community settings, such as ATMs, airports, or marketplaces.

The Covid-19 pandemic has demonstrated that we cannot expect to geographically contain the next influenza pandemic in the location where it emerges, nor can we expect to prevent the international spread of infection for more than a short period. Vaccines are not expected to be available during the early stage of the next pandemic (1). Therefore, we came up with this system to limit the spread of COVID-19 by ensuring social distancing among people. It uses CCTV camera feeds to identify social distancing violations.

We first apply object detection using a YOLOv3 model trained on the COCO dataset, which has 80 classes. YOLO uses the Darknet framework to process the incoming feed frame by frame. It returns the detections with their IDs, centroids, corner coordinates, and confidences in the form of multidimensional ndarrays. We receive that information and discard detections whose class is not “person”. We draw bounding boxes to highlight the detections in frames, then use the centroids to calculate the Euclidean distance between people in pixels. If the distance between two centroids is less than the configured value, the system throws an alert with a beeping sound and turns the violators' bounding boxes red.
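As a hedged sketch of this detection-and-distance step (not the paper's exact implementation), the snippet below assumes the standard YOLOv3 config/weights files and a single frame; the 50-pixel threshold is an arbitrary placeholder for the configured value.

# Sketch: detect people with YOLOv3 (OpenCV DNN) and flag close pairs.
# File names (yolov3.cfg/yolov3.weights) and the threshold are assumptions.
import itertools
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
layer_names = net.getUnconnectedOutLayersNames()

frame = cv2.imread("frame.jpg")  # one CCTV frame
h, w = frame.shape[:2]
blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)

centroids = []
for output in net.forward(layer_names):
    for det in output:
        scores = det[5:]
        class_id = int(np.argmax(scores))
        # COCO class 0 is "person"; keep confident person detections only.
        if class_id == 0 and scores[class_id] > 0.5:
            cx, cy = int(det[0] * w), int(det[1] * h)
            centroids.append((cx, cy))

# Flag any pair of people closer than the configured pixel distance.
MIN_DISTANCE = 50  # placeholder value
for (x1, y1), (x2, y2) in itertools.combinations(centroids, 2):
    if np.hypot(x2 - x1, y2 - y1) < MIN_DISTANCE:
        print("Violation between", (x1, y1), "and", (x2, y2))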

Research paper link: https://ieeexplore.ieee.org/document/9410745

More Details: Social Distance Monitoring System (Python, Deep Learning and OpenCV) (Research Paper)



Sense+

Last Updated on May 3, 2021

About

Sense+ makes the approach to helping those in need proactive, compared to the traditional reactive approach. It utilises speech, facial recognition, and other technologies to infer users' emotions.

Inspiration

The global pandemic has revealed the growing issue and importance of mental health, in particular one’s accessibility to mental health services and the detection of someone suffering from stress, anxiety or other mental health conditions.

We have personally seen that being mentally well allows us to work and study productively.

What worries us is the ongoing issue of those who are mentally unwell not approaching anyone due to the societal stigma of seeking treatment.

Our project/proof of concept aims to change the approach of helping those in need to a proactive one, rather than waiting for individuals to come forward by themselves, all whilst helping to reduce the stigma associated with suffering from mental health issues.

What it does

Our program integrates voice and facial recognition to detect/infer an individual’s emotions.

The voice component uses sentiment analysis to detect keywords in an audio transcript. These keywords are categorised as neutral, positive, or negative. Natural language processing and regular expressions are utilised to break audio transcripts down into multiple sentences/segments.

The facial recognition component uses convolutional neural networks to pick up features of one's face in order to identify emotions. Videos are broken down into multiple frames, which are fed into the neural network to make the prediction.

This model is trained and validated using the Facial Expression Recognition (2013) dataset from Kaggle.
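As a rough sketch of the frame-by-frame inference step described above, assuming a saved Keras model ('emotion_cnn.h5' is a hypothetical file name) and FER-2013-style 48x48 grayscale preprocessing:

# Sketch: break a video into frames and predict an emotion per frame.
# The model file name and preprocessing details are assumptions.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]
model = load_model("emotion_cnn.h5")  # hypothetical saved model

cap = cv2.VideoCapture("upload.mp4")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Convert to 48x48 grayscale, as in FER-2013, and normalise.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    face = cv2.resize(gray, (48, 48)).astype("float32") / 255.0
    probs = model.predict(face.reshape(1, 48, 48, 1), verbose=0)[0]
    print(EMOTIONS[int(np.argmax(probs))])
cap.release()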

As of now, we have nearly turned the above concept into an app that allows users to upload multiple videos, which are then analysed; results/predictions about the individual's emotional state are returned.

The implication of this is that it can help indicate whether the user should seek professional help, or at the very least make them aware of their current mental state.

How we built it

The frontend was developed using Java (Android Studio), whilst our backend was developed in Python with the help of packages such as TensorFlow, Keras, and SpeechRecognition. The frontend and backend communicate through the Amazon AWS platform. AWS Lambda is utilised so our code can run serverless and asynchronously. S3 is employed as a bucket to which the frontend uploads videos so the backend can process them. Additionally, output from the backend is stored as JSON in S3 so the frontend can retrieve it for display.
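A minimal sketch of the Lambda glue described above might look like the following; the S3 event wiring is standard, but the key naming and the analyse() helper are hypothetical stand-ins for our actual code.

# Sketch of an AWS Lambda handler: triggered when the app uploads a
# video to S3, runs the (hypothetical) analysis, and writes JSON back
# so the frontend can retrieve the results. Key names are invented.
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    record = event["Records"][0]["s3"]
    bucket, key = record["bucket"]["name"], record["object"]["key"]
    s3.download_file(bucket, key, "/tmp/video.mp4")
    results = {"video": key, "emotions": analyse("/tmp/video.mp4")}
    s3.put_object(Bucket=bucket,
                  Key=key.replace(".mp4", ".json"),
                  Body=json.dumps(results))

def analyse(path):
    # Placeholder for the TensorFlow/Keras inference described above.
    return ["neutral", "happy"]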

Challenges we ran into

The main challenge we faced was making our frontend and backend communicate. Mentors from Telstra, Atlassian, and Australia Post provided us with insights into solving this issue, though we did not quite get everything integrated into a single working piece of software.

Learning aspects of AWS was also challenging considering no one on our team had any prior experience.

On top of that, applying TensorFlow and Keras in a full project context was challenging given our limited hardware resources, and training was a time-consuming process.

Accomplishments that we're proud of

Despite not having completed a functioning prototype at this point in time, we are proud that we delved into new software, tools, and packages we had no prior experience with, and tried our best to utilise them. Finally, we are proud of how we conducted ourselves as a team, given our diverse range of skills and knowledge.

What we learned

First of all, communicating as a team is crucial. Main points include team ideation, being critical, and delegating appropriately according to each team member's strengths. Another point is learning to approach mentors or team members when you are struggling. Overcoming the stigma or anxiety of admitting to being ‘lost’ is an important lesson, and we found that when we overcame these barriers, we were able to progress.

What's next for Sense+

At the moment Sense+ remains, at its core, an idea rather than a piece of deliverable software. In the future we seek to improve accuracy when analysing and detecting emotion. This includes, but isn't limited to: more sophisticated sentiment analysis, improved modelling, and taking advantage of other biometrics that may come with advances in technology, such as heartbeat detection.

In terms of reach and usage, one possibility is that companies could employ such software to monitor the well-being of employees. In the future the software could be more passive, so that individuals can be monitored (with consent and confidentiality, of course) in a more natural manner. This would yield more accurate information on employee well-being than self-reports, where people may lie because of stigma and fear. This could greatly boost overall productivity and mental well-being within the company.

Other sectors this could be applied in are hospitals and education.

More Details: Sense+



Age and Gender Detection

Last Updated on May 3, 2021

About

Objective: To build a gender and age detector that can approximately guess the gender and age of the person (face) in a picture or through a webcam.

Description: In this Python project, I used deep learning to identify the gender and age of a person from a single image of a face. I used the models trained by Tal Hassner and Gil Levi. The predicted gender may be one of ‘Male’ or ‘Female’, and the predicted age falls into one of the following ranges: (0 – 2), (4 – 6), (8 – 12), (15 – 20), (25 – 32), (38 – 43), (48 – 53), (60 – 100) (8 nodes in the final softmax layer). It is very difficult to accurately guess an exact age from a single image because of factors like makeup, lighting, obstructions, and facial expressions, so I framed this as a classification problem rather than a regression problem.

For this Python project, I used the Adience dataset, which is available in the public domain. It serves as a benchmark for face photos and covers various real-world imaging conditions like noise, lighting, pose, and appearance. The images were collected from Flickr albums and are distributed under the Creative Commons (CC) license. It has a total of 26,580 photos of 2,284 subjects in eight age ranges (as mentioned above) and is about 1 GB in size. The models I used were trained on this dataset.

Working : Open your Command Prompt or Terminal and change directory to the folder where all the files are present.

  • Detecting the gender and age of a face in an image, use the command:

python detect.py --image image_name

  • Detecting the gender and age of a face through the webcam, use the command:

python detect.py
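As a hedged sketch of the core prediction step inside detect.py, assuming the commonly shared Hassner/Levi Caffe model files (the file names and mean values below follow those public releases and may differ from this project's layout):

# Sketch: predict age bucket and gender for a face crop with OpenCV DNN.
# Model file names and mean values follow the commonly shared Caffe
# releases and are assumptions about this project's files.
import cv2

AGE_BUCKETS = ['(0-2)', '(4-6)', '(8-12)', '(15-20)',
               '(25-32)', '(38-43)', '(48-53)', '(60-100)']
GENDERS = ['Male', 'Female']
MODEL_MEAN = (78.4263377603, 87.7689143744, 114.895847746)

age_net = cv2.dnn.readNet("age_net.caffemodel", "age_deploy.prototxt")
gender_net = cv2.dnn.readNet("gender_net.caffemodel", "gender_deploy.prototxt")

face = cv2.imread("face.jpg")  # a cropped face image
blob = cv2.dnn.blobFromImage(face, 1.0, (227, 227), MODEL_MEAN, swapRB=False)

gender_net.setInput(blob)
gender = GENDERS[gender_net.forward()[0].argmax()]
age_net.setInput(blob)
age = AGE_BUCKETS[age_net.forward()[0].argmax()]
print(f"Gender: {gender}, Age: {age}")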

More Details: Age and Gender Detection
