Analysis Of Indian Stock Market Using Black-Scholes Formula

Last Updated on May 3, 2021

About

Deep statistical analysis of option premiums in the Indian stock market with respect to the values predicted for them by the Black-Scholes formula.
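For reference, a minimal sketch of the pricing side of such a comparison, assuming a European call on a non-dividend-paying stock and purely illustrative parameter values (not the project's actual data):

```python
import numpy as np
from scipy.stats import norm

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call on a non-dividend-paying stock.

    S: spot price, K: strike, T: time to expiry in years,
    r: risk-free rate, sigma: annualized volatility.
    """
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# Illustrative values only: NIFTY-like spot/strike, 30-day expiry, 7% rate, 20% volatility.
print(black_scholes_call(S=15000, K=15200, T=30 / 365, r=0.07, sigma=0.20))
```

The predicted premium from this formula can then be compared statistically against the observed market premium for the same option.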

More Details: Analysis of Indian Stock Market using Black-Scholes Formula

Submitted By



Enterprise AI

Last Updated on May 3, 2021

About

Enterprise AI is about enhancing the customer satisfaction index and ensuring customer stickiness to your organization by infusing emerging technologies like artificial intelligence to engage and retain customers. Using AI algorithms, the project addresses business operation processes such as determining customer sentiment from different media (social media, audio calls, video calls, images, emails and chats), interacting with customers to provide quick and effortless solutions, analyzing and learning from buying behaviour to generate the next best offer, improving customer retention and reducing churn, deriving AI-based customer segmentation, managing customer touchpoints, evaluating customer feedback and engaging with customers.

We provide a membership card to all customers who purchase stock in the store. By scanning a QR code, the customer can easily submit feedback (Bad, Good, Very good) after purchasing. We group customers into three categories (Bronze, Gold and Platinum) to categorize their purchase lists and calculate purchasing efficiency based on the quality of what they buy. Customers who give "Very good" feedback fall into the Platinum category and receive the best offers (for example, a free purchase worth Rs. 1000). Notifications about newly available products and their prices are sent to customers through messages, and special offers are also provided on festival occasions. We classify the feedback using classification algorithms like random forest to separate positive and negative feedback; negative feedback is collected and rectified soon after. Through this approach, the shopkeeper is able to get clear feedback about his shop easily.
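As an illustration of the feedback classification step, here is a minimal sketch using scikit-learn; the TF-IDF features, toy data and label names are assumptions for demonstration, not the project's actual pipeline:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Toy feedback examples; the real project collects feedback through the QR-code form.
texts = ["very good service", "good", "bad experience, long queue", "very good offers"]
labels = ["positive", "positive", "negative", "positive"]

# TF-IDF text features feeding a random forest, one possible realisation of
# "classify feedback using random forest".
model = make_pipeline(TfidfVectorizer(),
                      RandomForestClassifier(n_estimators=100, random_state=0))
model.fit(texts, labels)

print(model.predict(["bad billing experience"]))  # expected: "negative"
```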



More Details: Enterprise AI

Human Computer Interaction Using Iris, Head And Eye Detection

Last Updated on May 3, 2021

About

HCI focuses on the interfaces between people and computers and on how to design, evaluate, and implement interactive computer systems that satisfy the user. The human-computer interface can be described as the point of communication between the human user and the computer, and the flow of information between them is defined as the loop of interaction. HCI deals with the design, execution and assessment of computer systems, and related phenomena, that are intended for human use. In this project the HCI process is implemented with a digital signal processing system that takes analog input from the user through dedicated hardware (a web camera) together with software.

Eye tracking is the process of measuring either the point of gaze (where one is looking) or the motion of the eye relative to the head. An eye tracker is a device for measuring eye positions and eye movements. Eye trackers are used in research on the visual system, in psychology, in marketing, as an input device for human-computer interaction, and in product design. There are a number of methods for measuring eye movement; the most popular variant uses video images from which the eye positions are extracted. Early eye-movement observations were made directly, and they showed that reading does not involve a smooth sweeping of the eyes along the text, as previously assumed, but a series of short stops (called fixations).

The records also show conclusively that the character of the eye movement is either completely independent of, or only very slightly dependent on, the material of the picture and how it is made. The cyclical pattern in the examination of a picture depends not only on what is shown in the picture, but also on the problem facing the observer and the information that one hopes to get from the picture. Eye movement reflects the human thought process, so the observer's thoughts may be followed to some extent from records of eye movement. It is easy to determine from these records which elements attract the observer's eye, in what order, and how often.

We build a neural network here; there are two types of network: the feed-forward network and the feedback network.
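As a minimal illustration of the feed-forward case only (the layer sizes, input features and output classes below are placeholders, not the network actually used in this project):

```python
import tensorflow as tf

# A small feed-forward (fully connected) network: information flows only
# from input to output, with no feedback connections.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64,)),              # e.g. a flattened eye-region feature vector
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g. two gaze-direction classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```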


Using video-oculography, horizontal and vertical eye movements tend to be easy to characterize, because they can be directly deduced from the position of the pupil. Torsional movements, which are rotational movements about the line of sight, are rather more difficult to measure; they cannot be directly deduced from the pupil, since the pupil is normally almost round and thus rotationally invariant. One effective way to measure torsion is to add artificial markers (physical markers, corneal tattoos, scleral markings, etc.) to the eye and then track these markers. However, the invasive nature of this approach tends to rule it out for many applications. Non-invasive methods instead attempt to measure the rotation of the iris by tracking the movement of visible iris structures.

Methodology

To measure a torsional movement of the iris, the image of the iris is typically transformed into polar co-ordinates about the center of the pupil; in this co-ordinate system, a rotation of the iris is visible as a simple translation of the polar image along the angle axis. Then, this translation is measured in one of three ways: visually, by using cross-correlation or template matching, or by tracking the movement of iris features. Methods based on visual inspection provide reliable estimates of the amount of torsion, but they are labour intensive and slow, especially when high accuracy is required. It can also be difficult to do visual matching when one of the pictures has an image of an eye in an eccentric gaze position.
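A rough sketch of the polar-unwrapping and cross-correlation idea using OpenCV is shown below; the pupil centre, iris radius and the use of phase correlation to measure the translation are illustrative assumptions, not the exact method described above:

```python
import cv2
import numpy as np

def iris_torsion_degrees(frame_a, frame_b, pupil_center, iris_radius, angular_samples=360):
    """Estimate iris torsion between two grayscale frames.

    The iris region is unwrapped into polar co-ordinates about the pupil centre,
    so a rotation of the iris appears as a translation along the angle axis of
    the polar image; that translation is then measured with phase correlation.
    """
    size = (iris_radius, angular_samples)  # (radial samples, angular samples)
    polar_a = cv2.warpPolar(frame_a, size, pupil_center, iris_radius, cv2.WARP_POLAR_LINEAR)
    polar_b = cv2.warpPolar(frame_b, size, pupil_center, iris_radius, cv2.WARP_POLAR_LINEAR)
    (dx, dy), _ = cv2.phaseCorrelate(np.float32(polar_a), np.float32(polar_b))
    # A vertical shift of one row in the polar image corresponds to 360/angular_samples degrees.
    return dy * 360.0 / angular_samples
```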

If instead one uses a method based on cross-correlation or template matching, then the method will have difficulty coping with imperfect pupil tracking, eccentric gaze positions, changes in pupil size, and non-uniform lighting. There have been some attempts to deal with these difficulties but even after the corrections have been applied, there is no guarantee that accurate tracking can be maintained. Indeed, each of the corrections can bias the results.

The remaining approach, tracking features in the iris image, can also be problematic. Features can be marked manually, but this process is time intensive, operator dependent, and can be difficult when the image contrast is low. Alternatively, one can use small local features like edges and corners. However, such features can disappear or shift when the lighting and shadowing on the iris changes, for example, during an eye movement or a change in ambient lighting. This means that it is necessary to compensate for the lighting in the image before calculating the amount of movement of each local feature.

In our application of the Maximally Stable Volumes detector, we choose the third dimension to be time, not space, which means that we can identify two-dimensional features that persist in time. The resulting features are maximally stable in space (2-D) and time (1-D), which means that they are 3-D intensity troughs with steep edges. However, the method of Maximally Stable Volumes is rather memory intensive, meaning that it can only be used for a small number of frames (in our case, 130 frames) at a time. Thus, we divide up the original movie into shorter overlapping movie segments for the purpose of finding features. We use an overlap of four frames, since the features become unreliable at the ends of each sub-movie. We set the parameters of the Maximally Stable Volumes detector such that we find almost all possible features. Of these features, we only use those that are near to the detected pupil center (up to 6 mm away) and small (smaller than roughly 1% of the iris region). We remove features that are large in angular extent (the pupil and the edges of the eyelids), as well as features that are further from the pupil than the edges of the eyelids (eyelashes).
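The bookkeeping around the detector can be sketched in plain Python as below; the Maximally Stable Volumes detector itself is not reimplemented here, the segment length and overlap follow the text above, and the feature attribute names and the angular-extent threshold are assumptions:

```python
def overlapping_segments(num_frames, segment_length=130, overlap=4):
    """Yield (start, end) frame indices for overlapping sub-movies."""
    step = segment_length - overlap
    start = 0
    while start < num_frames:
        yield start, min(start + segment_length, num_frames)
        start += step

def keep_feature(feature, pupil_center_mm, eyelid_radius_mm, iris_area_mm2):
    """Apply the distance and size filters described above to one detected feature.

    `feature` is assumed to expose centre coordinates (mm), area (mm^2) and
    angular extent (degrees); these attribute names are illustrative only.
    """
    dx = feature.x - pupil_center_mm[0]
    dy = feature.y - pupil_center_mm[1]
    dist = (dx * dx + dy * dy) ** 0.5
    return (dist <= 6.0                               # near the pupil centre (up to 6 mm away)
            and dist <= eyelid_radius_mm              # not beyond the eyelid edges (eyelashes)
            and feature.area <= 0.01 * iris_area_mm2  # small (roughly < 1% of the iris region)
            and feature.angular_extent <= 45.0)       # not large in angular extent (threshold assumed)
```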

We have used this setup to track eye movement and convert it into mouse-cursor movement using the Euclidean distance, which would greatly help disabled people. We have also implemented a virtual keyboard.
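A minimal sketch of such a gaze-to-cursor mapping is given below, using PyAutoGUI for the cursor; the calibration reference point, gain and dead-zone values are assumptions, not the project's tuned parameters:

```python
import math
import pyautogui

DEAD_ZONE = 10   # pixels of pupil displacement to ignore (assumed)
GAIN = 2.0       # cursor pixels moved per pixel of pupil displacement (assumed)

def move_cursor(pupil_xy, reference_xy):
    """Move the mouse in the direction of the pupil's displacement from a
    calibrated reference point, scaled by the Euclidean distance."""
    dx = pupil_xy[0] - reference_xy[0]
    dy = pupil_xy[1] - reference_xy[1]
    distance = math.hypot(dx, dy)
    if distance < DEAD_ZONE:
        return  # eye is roughly centred; leave the cursor where it is
    pyautogui.moveRel(GAIN * dx, GAIN * dy, duration=0.05)
```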



More Details: Human Computer Interaction using iris, head and eye detection

Submitted By


My Rewards - Alexa Skill

Last Updated on May 3, 2021

About

An Alexa skill to reward kids for good behavior.

Inspiration

After building http://eFamilyBoard.com I decided to purchase an Echo Show (2nd gen) for comparison. eFamilyBoard has a few nicer features, but overall the Alexa is much more powerful and scalable. One heavily used feature was the sticker board on eFamilyBoard, which didn't have a comparable skill on Alexa. As a result, I decided to build the My Rewards skill to replace it.

What it does

It allows families and teachers to reward kids for good behavior. The user ultimately decides what to do with the rewards. Personally, our kids earn $5 after they've earned 10 total rewards, then they start over. The user can add recipients and give multiple rewards at a time, for example, "Alexa, give John 5 stickers" or "Alexa, take away 2 stickers from John". And if you don't know what type of reward to give or take away, you can always simply say "rewards" in place of the reward type (e.g. football, sticker, heart, unicorn, truck, cookie, doughnut, etc.).
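To illustrate how such an utterance maps to an intent with recipient, count and reward-type slots, here is a rough sketch in Python with the ask-sdk (the actual skill is written in TypeScript, and the intent and slot names here are hypothetical):

```python
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name

class GiveRewardIntentHandler(AbstractRequestHandler):
    """Handles utterances like "give John 5 stickers" (hypothetical intent/slot names)."""

    def can_handle(self, handler_input):
        return is_intent_name("GiveRewardIntent")(handler_input)

    def handle(self, handler_input):
        slots = handler_input.request_envelope.request.intent.slots
        recipient = slots["recipient"].value
        count = int(slots["count"].value or "1")
        reward = slots["rewardType"].value or "rewards"
        # ...update the recipient's running total here (e.g. in DynamoDB)...
        speech = f"Okay, {recipient} earned {count} more {reward}."
        return handler_input.response_builder.speak(speech).response

sb = SkillBuilder()
sb.add_request_handler(GiveRewardIntentHandler())
handler = sb.lambda_handler()
```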

How I built it

I built it with the ASK CLI and Visual Studio Code. I started with the sample hello world app and refactored it to use TypeScript, Express, and ngrok so it could run locally. I also used Mocha with Chai for unit tests that must run and pass before I can deploy to AWS.

Challenges I ran into

I got started by taking a course on Udemy, but it didn't use TypeScript and it deployed to AWS for every change, which would take FOREVER to debug and build with efficiently. Instead, I set up a simple Express app and use ngrok to route calls to my local machine. This allows me to talk to my Alexa and debug by stepping through the code in VS Code.

Accomplishments that I'm proud of

Project setup, local debugging with TypeScript, and tests with 90%+ code coverage. Not only does it work for voice, but it also supports display templates to show the user what rewards each participant has earned. I was going to add in-skill purchasing (ISP) down the road but decided to do it from the start, and it ended up being easier than expected. For my first Alexa app, I think it works extremely well, and my kids started using it with no learning curve.

What I learned

This being my first Alexa app, I learned a TON, from how to simply use Alexa (still learning tricks) to how to interact with voice commands. I had also never used DynamoDB, but the Alexa SDK made that super easy.

What's next for My Rewards

Add support for more languages. My family has been using it during development, but I'm excited to see what others think of it.

More Details: My Rewards - Alexa Skill

Submitted By


Automated Generation Of Videos From News Stories

Last Updated on May 3, 2021

About

Recent advancements in internet, media capture, and mobile technologies have enabled the fast-growing news industry to produce and publish news stories rapidly. The news industry is now trying hard to make its stories more attractive and engaging to readers. Young readers often do not have the time to go through an entire news article, yet they want to know all the important elements of the article. Recent surveys suggest that Millennials and similar age groups prefer news as video over news as text. However, manual generation of videos for each news article is costly and laborious. Hence there is a need for a news video generation system that can create interesting, engaging, concise and high-quality news videos from text news stories with little or no human intervention.

This research will develop an end-to-end automated solution for generating videos from news articles. The system will have different NLP-based components for automated news content analysis. Key phrases will be detected in the news article using NLP or deep learning methods. Named entities in a news story, such as person, time, place and brand, can be automatically detected using NER and highlighted in the video. Emotions in the news text or phrases will be detected to automatically suggest background music or emojis for video production. In addition, popular tweets related to the news covered by the article can be detected and included in the final video. Images and videos related to the news content will be automatically discovered by crawling the internet and used as background scenery in the video. This effort will also consider speeding up the aforementioned steps for real-time video production.
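As a hedged sketch of the named-entity and key-phrase components, using spaCy as one possible toolkit (the model name and the simple noun-chunk heuristic are illustrative choices, not the project's final design):

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # small English model, illustrative choice

def analyse_article(text):
    """Return named entities and crude key phrases for one news article."""
    doc = nlp(text)
    entities = [(ent.text, ent.label_) for ent in doc.ents]  # person, place, org, date, ...
    key_phrases = [chunk.text for chunk in doc.noun_chunks]  # simple noun-chunk heuristic
    return entities, key_phrases

ents, phrases = analyse_article(
    "The Reserve Bank of India kept rates unchanged on Friday in Mumbai.")
print(ents)     # e.g. [('The Reserve Bank of India', 'ORG'), ('Friday', 'DATE'), ('Mumbai', 'GPE')]
print(phrases)  # candidate key phrases for on-screen highlights
```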

More Details: Automated Generation of Videos from News Stories

Submitted By


NavAssist AI

Last Updated on May 3, 2021

About

Incorporating machine learning, and haptic feedback, NavAssistAI detects the position and state of a crosswalk light, which enables it to aid the visually impaired in daily navigation.

Inspiration

One day, we were perusing YouTube looking for an idea for our school's science fair, and we came across a blind YouTuber named Tommy Edison. He had uploaded a video of himself attempting to cross a busy intersection on his own. It was apparent that he was having difficulty, and at one point he almost ran into a street sign. After seeing his video, we decided that we wanted to leverage new technology to help people like Tommy in daily navigation, so we created NavAssist AI.

What it does

In essence, NavAssist AI uses object detection to detect both the position and state of a crosswalk light (stop hand or walking person). It then processes this information and relays it to the user through haptic feedback in the form of vibration motors inside a headband. This allows the user to understand whether it is safe to cross the street or not, and in which direction they should face when crossing.
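A minimal sketch of the relay step, mapping a detected light state to vibration motors via Raspberry Pi GPIO, might look like the following; the pin numbers and pulse pattern are assumptions, and the real headband uses its motors to convey direction as well:

```python
import time
import RPi.GPIO as GPIO

WALK_MOTOR_PIN = 17   # assumed BCM pin wired to the "walk" motor
STOP_MOTOR_PIN = 27   # assumed BCM pin wired to the "stop" motor

GPIO.setmode(GPIO.BCM)
GPIO.setup([WALK_MOTOR_PIN, STOP_MOTOR_PIN], GPIO.OUT, initial=GPIO.LOW)

def signal_state(state, duration=0.5):
    """Pulse the vibration motor that corresponds to the detected light state."""
    pin = WALK_MOTOR_PIN if state == "walk" else STOP_MOTOR_PIN
    GPIO.output(pin, GPIO.HIGH)
    time.sleep(duration)
    GPIO.output(pin, GPIO.LOW)
```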

How we built it

We started out by gathering our own dataset of 200+ images of crosswalk lights because there was no existing library of such images. We then ran through many iterations of many different models, training each on this dataset. Across the different model architectures and iterations, we strove to find a balance between accuracy and speed, and we eventually discovered that an SSDLite MobileNet model from the TensorFlow model zoo had the balance we required. Using transfer learning and many iterations, we trained a model that finally worked. We implemented it on a Raspberry Pi with a camera, soldered on a power button and vibration motors, and custom-designed a 3D-printed case with room for a battery. This made our prototype a wearable device.
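The inference loop on the Raspberry Pi could be sketched as follows, assuming a SavedModel exported from the TensorFlow Object Detection API; the model path, class indices and score threshold are placeholders, not the project's actual values:

```python
import numpy as np
import tensorflow as tf

detect_fn = tf.saved_model.load("ssdlite_mobilenet_crosswalk/saved_model")  # placeholder path
CLASS_NAMES = {1: "walk", 2: "stop"}  # assumed label map

def classify_frame(frame_rgb, score_threshold=0.5):
    """Run the detector on one camera frame and return the most confident light state."""
    input_tensor = tf.convert_to_tensor(frame_rgb[np.newaxis, ...], dtype=tf.uint8)
    detections = detect_fn(input_tensor)
    scores = detections["detection_scores"][0].numpy()
    classes = detections["detection_classes"][0].numpy().astype(int)
    if scores[0] < score_threshold:
        return None  # no confident detection in this frame
    return CLASS_NAMES.get(classes[0])
```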

Challenges we ran into

When we started this project, we knew nothing about machine learning or TensorFlow and had to start from scratch. However, with some googling and trying things out, we were able to figure out how to implement TensorFlow for our project with relative ease. Another challenge was collecting, preparing, and labelling our dataset of 200+ images. Our most important challenge, though, was not knowing what it's like to be visually impaired. To overcome this, we went out to people in the blind community and talked to them so that we could properly understand the problem and create a good solution.

Accomplishments that we're proud of

  • Making our first working model that could tell the difference between stop and go
  • Getting the haptic feedback implementation to work with the Raspberry Pi
  • When we first tested the device and successfully crossed the street
  • When we presented our work at TensorFlow World 2019

All of these milestones made us very proud because we are progressing towards something that could really help people in the world.

What we learned

Throughout the development of this project, we learned so much. Going into it, we had no idea what we were doing. Along the way, we learned about neural networks, machine learning, computer vision, as well as practical skills such as soldering and 3D CAD. Most of all, we learned that through perseverance and determination, you can make progress towards helping to solve problems in the world, even if you don't initially think you have the resources or knowledge.

What's next for NavAssistAI

We hope to expand its ability for detecting objects. For example, we would like to add detection for things such as obstacles so that it may aid in more than just crossing the street. We are also working to make the wearable device smaller and more portable, as our first prototype can be somewhat burdensome. In the future, we hope to eventually reach a point where it will be marketable, and we can start helping people everywhere.

More Details: NavAssist AI

Submitted By