Next Word Predictor (2020)
Aim: The objective of this project is to create a text prediction model that predicts the next word, using a Natural Language Processing (NLP) model built on a deep learning framework.
About The Model: Language prediction is a Natural Language Processing (NLP) application concerned with predicting the text that follows from the preceding text. Auto-complete and suggested responses are popular examples of language prediction. The first step towards language prediction is the selection of a language model. There are generally two approaches for developing a next-word suggestor/predictor: 1) an N-grams model or 2) a Long Short-Term Memory (LSTM) network. We will use an LSTM, because the N-grams approach has limitations: it produces suggestions based only on the frequency of observed N-grams, so there are many scenarios where it fails, such as any context that never appeared in the training text.
Dataset: We will use the text from the book Metamorphosis by Franz Kafka.
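A minimal sketch of how such an LSTM predictor could be trained with Keras is shown below; the file name, sequence length, and layer sizes are illustrative assumptions, not the exact configuration used in the project.

```python
# Minimal next-word LSTM sketch (Keras). Hyperparameters and the file name
# ("metamorphosis.txt") are illustrative assumptions.
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense
from tensorflow.keras.utils import to_categorical

text = open("metamorphosis.txt", encoding="utf-8").read().lower()

tokenizer = Tokenizer()
tokenizer.fit_on_texts([text])
tokens = tokenizer.texts_to_sequences([text])[0]
vocab_size = len(tokenizer.word_index) + 1

# Build (context, next word) training pairs from a sliding window.
seq_len = 5
X, y = [], []
for i in range(seq_len, len(tokens)):
    X.append(tokens[i - seq_len:i])
    y.append(tokens[i])
X = np.array(X)
y = to_categorical(y, num_classes=vocab_size)

model = Sequential([
    Embedding(vocab_size, 64),
    LSTM(128),
    Dense(vocab_size, activation="softmax"),
])
model.compile(loss="categorical_crossentropy", optimizer="adam")
model.fit(X, y, epochs=20, batch_size=128)

# Predict the most likely next word for a seed phrase.
seed = tokenizer.texts_to_sequences(["one morning when gregor samsa"])[0][-seq_len:]
next_id = int(np.argmax(model.predict(np.array([seed]))))
print(tokenizer.index_word[next_id])
```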
Voice Of The Day
The format used to work well on the radio, so we wanted to recreate those memories on Alexa.
What it does
Players can listen to up to three voice clips of well-known people and/or celebrities talking every day, as well as view a blurred image of the celebrity. After each clip, Alexa will ask you who you think is talking, and you must try to answer correctly. This will earn you a score for the monthly and all-time leader boards. The player can ask Alexa for hints, or to skip that voice clip and move on to the next one. Users' scores are awarded depending on how many incorrect answers they gave for that voice and whether or not they used a hint. Users can also ask to hear yesterday's answers, in case they couldn't get the answers on that day.
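A small sketch of how per-voice scoring like this could be computed; the base score and the penalty values are assumptions for illustration, not the skill's actual rules.

```python
# Hypothetical scoring sketch: the base score, per-guess penalty, and hint
# penalty are illustrative assumptions, not the skill's actual values.
def score_voice(incorrect_guesses: int, used_hint: bool) -> int:
    base = 100
    guess_penalty = 25 * incorrect_guesses
    hint_penalty = 20 if used_hint else 0
    return max(base - guess_penalty - hint_penalty, 0)

# Example: two wrong guesses and a hint before answering correctly.
print(score_voice(incorrect_guesses=2, used_hint=True))  # 30
```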
How I built it
To create the structure of the skill, we used the Alexa Skills Kit CLI.
We used Amazon S3 storage for all our in-game assets, such as the audio clips and images.
We used the Alexa Presentation Language to create the visual interface for the skill.
We used the Amazon GameOn SDK to create monthly and all-time leader boards for all users to enter without any sign up.
Every day, free users will be given the ‘easy’ clip to answer. The set of clips each day is selected depending on the day of the year. Users who have purchased premium gain access to the ‘medium’ and ‘hard’ clips every day, as well as being able to ask for hints for the voices, skip a voice if they are stuck, and enter scores onto the leader boards.
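One way to pick a deterministic daily clip set from the day of the year is sketched below; the catalogue structure, file names, and modulo selection are assumptions for illustration, not the skill's actual code.

```python
# Hypothetical daily clip selection: rotate through a clip catalogue using
# the day of the year so every user hears the same set on a given day.
from datetime import date

# Illustrative catalogue; the real skill stores its audio clips in S3.
CLIPS = [
    {"easy": "clip_easy_01.mp3", "medium": "clip_med_01.mp3", "hard": "clip_hard_01.mp3"},
    {"easy": "clip_easy_02.mp3", "medium": "clip_med_02.mp3", "hard": "clip_hard_02.mp3"},
]

def clips_for_today(is_premium, today=None):
    today = today or date.today()
    day_set = CLIPS[today.timetuple().tm_yday % len(CLIPS)]
    if is_premium:
        return day_set                   # easy, medium and hard
    return {"easy": day_set["easy"]}     # free users get the easy clip only

print(clips_for_today(is_premium=False))
```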
Accomplishments that I’m proud of
As well as creating a high-quality voice-only experience, we developed the skill to be very APL focused, which we are very proud of. The visual assets we used for the project were very high quality and we were able to dynamically display the appropriate data for each screen within the skill. The content loaded depends on who is talking, as well as the difficulty of the voice that the user is answering. APL also allowed us to blur and unblur the celebrity images, as opposed to having separate blurred and unblurred images for each person.
We were also very pleased with how we implemented the GameOn SDK into the skill. When the user submits a score, a random avatar is created for them, and their score is submitted under this avatar. This avoids any sign up to use the leader boards, allowing all users to use them easily.
The GameOn SDK also allows us to create monthly and all-time competitions/leader boards into which all users are automatically entered.
What I learned
I have learnt how to develop with APL, as well as better practices for structuring it more efficiently. For example, there are many APL views in the project, all of which are almost identical; a more effective approach for future projects would be to condense these into one primary view used for every screen, bound to the appropriate data.
I have also been able to hone the prompts shown to the user for upsells and the leader boards. Testing showed that prompting for these on every play becomes tedious, so we have reduced their frequency for a much better user experience.
Predicting Employees Under Stress For Pre-Emptive Remediation Using Machine Learning Algorithm
With the ongoing COVID-19 pandemic, businesses and organizations have acclimated to unconventional working patterns, such as working from home or working with limited employees on office premises. With the new normal here to stay for the near future, employees have adapted to different working environments and customs, which has also resulted in psychological stress and lethargy for many as they adjust their personal and professional lives. In this work, data visualization techniques and machine learning algorithms are used to predict employees' stress levels. Based on the data, we develop a model that helps predict whether an employee is likely to be under stress. The XGB classifier is used for the prediction process, and the results presented show that the method yields a more reliable model performance. Interpreting the XGB classifier indicates that working hours, workload, age, and role ambiguity have a significant negative influence on employee performance; the remaining factors hold little significance compared to these. It is therefore concluded that increasing working hours, role ambiguity, and workload would diminish employee performance in all respects.
Link for paper: https://ieeexplore.ieee.org/document/9315726?denied=
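A minimal sketch of the kind of XGBoost pipeline described above; the column names, data file, train/test split, and use of feature importances for interpretation are assumptions for illustration, not the paper's exact setup.

```python
# Hypothetical stress-prediction sketch with XGBoost. The feature names and
# CSV file ("employee_survey.csv") are illustrative assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

df = pd.read_csv("employee_survey.csv")
features = ["working_hours", "workload", "age", "role_ambiguity"]
X, y = df[features], df["under_stress"]        # binary target: 1 = stressed

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
# Feature importances give a rough view of which factors drive predictions.
print(dict(zip(features, model.feature_importances_)))
```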
Hacktube
A Chrome extension that fights online harassment by filtering out comments with strong language.
YouTube is a place for millions of people to share their voices and engage with their communities. Unfortunately, the YouTube comments section is notorious for enabling anonymous users to post hateful and derogatory messages with the click of a button. These messages are purely meant to cause anger and depression without ever providing any constructive criticism. For YouTubers, this means seeing the degrading and mentally-harmful comments on their content, and for the YouTube community, this means reading negative and offensive comments on their favorite videos. As young adults who consume this online content, we feel as though it is necessary to have a tool that combats these comments to make YouTube a safer place.
What it does
HackTube automatically analyzes every YouTube video you watch, targeting comments which are degrading and offensive. It constantly checks the page for hateful comments, so if the user loads more comments, the extension will pick those up. It then blocks comments which it deems damaging to the user, listing the total number of blocked comments at the top of the page. This process is all based on user preference, since the user chooses which types of comments (sexist, racist, homophobic, etc.) they do not want to see. It is important to note that the user can disable the effects of the extension at any time. HackTube is not meant to censor constructive criticism; rather, it combats comments which are purely malicious in intent.
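The core filtering idea can be sketched independently of the extension itself; the sketch below is in Python for brevity, the category keyword lists are invented placeholders, and the real extension operates on the live YouTube comments DOM from JavaScript.

```python
# Hypothetical sketch of HackTube's filtering idea, in Python for brevity.
# The categories and keyword lists here are invented placeholders.
BLOCKLISTS = {
    "sexist": {"placeholder_slur_1"},
    "racist": {"placeholder_slur_2"},
    "homophobic": {"placeholder_slur_3"},
}

def filter_comments(comments, enabled_categories):
    """Return (kept_comments, blocked_count) based on the user's preferences."""
    kept, blocked = [], 0
    for comment in comments:
        words = set(comment.lower().split())
        if any(words & BLOCKLISTS[cat] for cat in enabled_categories):
            blocked += 1          # replaced with a warning in the current demo
        else:
            kept.append(comment)
    return kept, blocked

kept, blocked = filter_comments(
    ["nice video!", "placeholder_slur_1 you"], enabled_categories=["sexist"]
)
print(blocked, "comments blocked;", kept)
```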
How we built it
Challenges we ran into
Accomplishments that we're proud of
We are proud of making a functional product that can not only fight online harassment and cyberbullying but also appeal to a wide variety of people.
What we learned
We learned how to dynamically alter the source code of a webpage through a Chrome extension. We also learned just how many YouTube comments are full of hate and malicious intent.
What's next for HackTube
Right now, for demo purposes, HackTube merely changes the hateful comments into a red warning statement. In the future, HackTube will have an option to fully take out the malicious comment, so users’ YouTube comments feed will be free of any trace of hateful comments. Users won’t have to worry about how many comments were flagged and what they contained. Additionally, we will have a way for users to input their own words that offend them and take the comments that contain those words out of the section.
NavAssist AI
Incorporating machine learning, and haptic feedback, NavAssistAI detects the position and state of a crosswalk light, which enables it to aid the visually impaired in daily navigation.
One day, we were perusing YouTube looking for an idea for our school's science fair. On that day, we came across a blind YouTuber named Tommy Edison. He had uploaded a video of himself attempting to cross a busy intersection on his own. It was apparent that he was having difficulty, and at one point he almost ran into a street sign. After seeing his video, we decided that we wanted to leverage new technology to help people like Tommy in daily navigation, so we created NavAssist AI.
What it does
In essence, NavAssist AI uses object detection to detect both the position and state of a crosswalk light (stop hand or walking person). It then processes this information and relays it to the user through haptic feedback in the form of vibration motors inside a headband. This allows the user to understand whether it is safe to cross the street or not, and in which direction they should face when crossing.
How we built it
We started out by gathering our own dataset of 200+ images of crosswalk lights because there was no existing library of such images. We then ran through many iterations on many different models, training each model on this dataset. Across the different model architectures and iterations, we strove to find a balance between accuracy and speed. We eventually discovered that an SSDLite MobileNet model from the TensorFlow model zoo had the balance we required. Using transfer learning and many iterations, we trained a model that finally worked. We implemented it on a Raspberry Pi with a camera, soldered on a power button and vibration motors, and custom-designed a 3D-printed case with room for a battery. This made our prototype a wearable device.
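A rough sketch of the on-device inference loop such a prototype might run; the model file name, label map, confidence threshold, and GPIO pin numbers are assumptions for illustration, not the project's actual configuration.

```python
# Hypothetical Raspberry Pi inference loop: run an SSD detector on camera
# frames and drive vibration motors. Model path, label IDs, and GPIO pins
# are illustrative assumptions.
import cv2
import numpy as np
import tflite_runtime.interpreter as tflite
import RPi.GPIO as GPIO

WALK_MOTOR, STOP_MOTOR = 17, 27      # assumed GPIO pins for the two motors
GPIO.setmode(GPIO.BCM)
GPIO.setup([WALK_MOTOR, STOP_MOTOR], GPIO.OUT)

interpreter = tflite.Interpreter(model_path="crosswalk_ssdlite.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    resized = cv2.resize(frame, (300, 300))
    interpreter.set_tensor(inp["index"], np.expand_dims(resized, 0).astype(np.uint8))
    interpreter.invoke()
    classes = interpreter.get_tensor(out[1]["index"])[0]   # detected class IDs
    scores = interpreter.get_tensor(out[2]["index"])[0]    # confidences
    # Class 1 = "walk" signal, class 2 = "stop" hand (assumed label map).
    walk = any(c == 1 and s > 0.6 for c, s in zip(classes, scores))
    stop = any(c == 2 and s > 0.6 for c, s in zip(classes, scores))
    GPIO.output(WALK_MOTOR, walk)
    GPIO.output(STOP_MOTOR, stop and not walk)
```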
Challenges we ran into
When we started this project, we knew nothing about machine learning or TensorFlow and had to start from scratch. However, with some googling and trying things out, we were able to figure out how to implement TensorFlow for our project with relative ease. Another challenge was collecting, preparing, and labelling our dataset of 200+ images. Our most important challenge, though, was not knowing what it's like to be visually impaired. To overcome this, we went out to people in the blind community and talked to them so that we could properly understand the problem and create a good solution.
Accomplishments that we're proud of
- Making our first working model that could tell the difference between stop and go
- Getting the haptic feedback implementation to work with the Raspberry Pi
- When we first tested the device and successfully crossed the street
- When we presented our work at TensorFlow World 2019
All of these milestones made us very proud because we are progressing towards something that could really help people in the world.
What we learned
Throughout the development of this project, we learned so much. Going into it, we had no idea what we were doing. Along the way, we learned about neural networks, machine learning, computer vision, as well as practical skills such as soldering and 3D CAD. Most of all, we learned that through perseverance and determination, you can make progress towards helping to solve problems in the world, even if you don't initially think you have the resources or knowledge.
What's next for NavAssistAI
We hope to expand its ability for detecting objects. For example, we would like to add detection for things such as obstacles so that it may aid in more than just crossing the street. We are also working to make the wearable device smaller and more portable, as our first prototype can be somewhat burdensome. In the future, we hope to eventually reach a point where it will be marketable, and we can start helping people everywhere.
Research Paper On Face Detection Using Haar Cascade Classifier
In the last several years, face detection has been one of the most engaging fields in research. Face detection algorithms are used for the detection of frontal human faces, and face detection finds use in many applications such as face tracking, face analysis, and face recognition. In this paper, we discuss face detection using a Haar cascade classifier and OpenCV, and we also survey some of the face detection technologies in use.
In this study, we covered and studied in detail the face detection technique using the Haar cascade classifier and OpenCV to get the desired output. Using the OpenCV library, the Haar cascade classifier was able to perform face detection with high accuracy and efficiency. We also used the OpenCV package to extract some facial features and compare them, and we discussed some popular face detection methods, the future scope of face detection, and some of its applications. We conclude that the future of face detection technology is bright: security and surveillance are the major segments that will be deeply influenced, while other areas now welcoming it include private industry, public buildings, and schools.
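A minimal OpenCV sketch of the Haar cascade detection step described in the paper; the input image file name and the detection parameters are illustrative assumptions.

```python
# Minimal Haar cascade face detection sketch with OpenCV.
# The image file name and detection parameters are illustrative assumptions.
import cv2

# OpenCV ships the pretrained frontal-face Haar cascade with the package.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("group_photo.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# scaleFactor and minNeighbors trade off detection rate against false positives.
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

print(f"Detected {len(faces)} face(s)")
cv2.imwrite("detected_faces.jpg", image)
```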