Sales-Prediction

Last Updated on May 3, 2021

About

The aim is to build a model that predicts sales based on the money spent on different advertising platforms, such as TV, radio, and newspaper, using simple linear regression and multiple linear regression.
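As a rough illustration, here is a minimal sketch of the multiple linear regression step in scikit-learn (the simple variant would use a single column such as TV). The file name advertising.csv and the column names TV, Radio, Newspaper, and Sales are assumptions based on the classic advertising dataset, not confirmed details of this project.

# Minimal sketch (assumptions: the classic advertising dataset saved as
# advertising.csv with TV, Radio, Newspaper, and Sales columns).
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

data = pd.read_csv("advertising.csv")            # hypothetical file name
X = data[["TV", "Radio", "Newspaper"]]           # money spent per platform
y = data["Sales"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = LinearRegression().fit(X_train, y_train)
print("Coefficients:", dict(zip(X.columns, model.coef_)))
print("Test R^2:", r2_score(y_test, model.predict(X_test)))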

More Details: Sales-Prediction


Helping The People Who Are In Need

Last Updated on May 8, 2021

About

Our Inspiration

"Anybody can help but the help at the right time brings the big difference. Anybody can help everybody but helping the needy brings the big difference".

As rightly quoted above, helping the needy at the right time has a huge impact on society. In this pandemic situation, we come across a lot of people at risk of losing their lives even though they stay inside their homes. The reason behind this is not COVID-19 but poverty. There are also people who are passionate about serving society by providing for people's needs. So we, Team CRUTCH, decided to serve society through our tech minds by designing an app called CRUTCH, which acts as a bridge between donor and recipient.

This application is built on top of the Pega Platform, version 8.4.

Challenges

The challenging tasks for us were making live requests, choosing the most needed services, and handling the services between donor and recipient. But as a team we were able to turn those challenges into a fruitful product.

CRUTCH acts as a platform for sharing assets from donors with needy people at the right time.

Learning Experience

We had a great learning experience, and we had the chance to work with activities and to customize harnesses and portals.

Uniqueness of Our Application CRUTCH

Crutch provides the customer with six services, and the customer can choose any service according to his/her interest.

Bringing the need to your doorstep, at the right time.

Apart from donor and requester, Crutch has an additional volunteer feature for people who are passionate about serving society: volunteers assist the donor and recipient by bringing the need to their doorstep at the right time. Serve the society, top the leaderboard: Crutch has a unique feature of maintaining a leaderboard for volunteers, where the one who serves best leads the leaderboard as the top performer.

About Crutch

Crutch is a platform that extends help to needy people at the right time. The motive of Crutch is to connect demand and supply. The services available in this app include the Food Service, Education Service, Cloth Service, Blood Donate/Receive (Share Blood) Service, Organ Donate/Receive (Share Organ) Service, and Money (Crutch Pay) Service.

How Crutch Works

A customer can initially register as a donor or recipient; when logging in the next time, he has the option to change his role type if he wishes. After logging in, he can choose any of the six exclusive services. If the customer acts as a recipient, he can request his/her need. An intimation about the request is sent to all registered donors, and if any donor accepts to donate, Crutch provides them the option to choose volunteer assistance. If he chooses the volunteer service, a Crutch volunteer helps deliver the requirement from donor to recipient at the right time. Thereby Crutch feeds the needy at the right time. Crutch also has a verification team to ensure that the details submitted by a recipient do not contain any false information.

Crutch Functionality

Crutch has six services, apart from the Registration, Volunteer Survey, and Feed the Need case types. The following is a detailed description of each case type.

1) Registration: The customer signs up to the Crutch application via the Registration case type. He has to specify his role type (donor/recipient) and service type (one of the six services), based on which a work group and workbaskets are assigned; the role type can be changed in the future as well. The customer then provides preliminary details such as personal info, organisation info (if he/she represents any organisation), and address details. After approval by the Crutch admin, access credentials are created for the customer. The case does not move to approval unless he accepts the terms and conditions of Crutch. Innovation: we implemented a completion-status progress bar so the customer can track his registration completion status.

2) Feed the Need: The purpose of this case type is to help needy people choose the service according to their need. Six exclusive services are shown to the customer on their next login, where he/she can choose one and proceed with the flow.

Innovation: six attractive icons are configured; when the user clicks an icon, the corresponding case type begins.

3) Crutch Pay: The purpose of this case type is to donate/receive money. The recipient begins the case, where he views all his details in an editable format and can edit them if necessary. The customer can raise a money request for himself or for a friend/relative; in the case of a friend/relative, he has to provide their details. There are a few predefined reasons for requesting money, such as requesting funds for an orphanage/home, for a natural calamity, or for illness/accident; the customer can also provide his own reason by choosing Others. He has to provide all the required information along with the necessary documents, and he also specifies his preferred timing for the help. After submission, a background verification process is carried out by the Crutch admin, where all details are checked; the case proceeds only if the details are true. On successful completion of verification, an intimation is sent to all donors, and if any donor accepts, he can confirm his own details and move on to payment. The method of payment can also be negotiated by the donor if he is comfortable with it. After receiving the money, the recipient fills in the fulfilment details and provides feedback.

Innovation: a modal dialog (pop-up) for thanking the customer.

4) Food Service: The donor/recipient can begin the case by providing the food details. If the recipient begins the case, it moves to the donor for viewing the recipient's details; if he accepts the request, it moves on to gathering delivery information. There the recipient has the option of requesting volunteer service, in which case the case routes to the volunteer workbasket; when a volunteer picks up the work, he confirms the donor and recipient address details and finally delivers to them. If the donor begins the case, he provides the details of the food being donated, and the case routes to the recipient workbasket. The recipient then has the option of selecting self-service or volunteer service; if he goes for volunteer service, the case routes to the volunteer workbasket, and the volunteer who picks up the work confirms the donor and recipient address details and delivers to them.

5) Cloth Service: This works like the Food Service: the donor/recipient begins the case by providing the cloth details. If the recipient begins the case, it moves to the donor for viewing the recipient's details; if he accepts the request, it moves on to gathering delivery information, where the recipient can request volunteer service, routing the case to the volunteer workbasket; the volunteer who picks up the work confirms the donor and recipient address details and delivers to them. If the donor begins the case, he provides the details of the cloth being donated, and the case routes to the recipient workbasket. The recipient can select self-service or volunteer service; with volunteer service, the case routes to the volunteer workbasket, and the volunteer who picks up the work confirms the address details and delivers to them.

6) Education Service: The recipient begins the case, where he has the option to act as a referencer or as self, based on the type of request he raises: if he is raising the request for himself, he proceeds as self; if not, he acts as a referencer. He gives all his preliminary details and attaches the necessary documents. The case then moves forward to the background verification process, and it does not move further unless the verification completes successfully. On successful completion, it routes to the donor workbasket. If any donor accepts to provide help, contact details are shared with both parties. Fulfilment details are then collected from the recipient after he receives the help.

7) Share Blood: This case type can be accessed by the donor/recipient. If the donor begins the case, he can donate blood by providing all the necessary details. If the recipient begins the case, he has two options: he can either view the already-donated blood details that match his requirements or raise a new request. After he submits the request, the case moves forward to the donor, who can accept the request, confirm his details, and also ask for volunteer assistance. Thus, with the help of volunteer support, the requirement is delivered from donor to recipient.

8) Share Organ: This case type can be accessed by the donor/recipient. If the donor begins the case, he needs to provide his details. If he has his organ registration certificate, he needs to attach it; the admin then reviews it and approves/rejects his request. If he does not have his organ registration certificate, he can request medical assistance: the hospital details are mailed to the donor, and he has to visit the hospital within 5 days, where they will examine him and provide the certificate. If the case is begun by the recipient, he can request an organ; he needs to provide the required information and the certificate. The case is then routed to the donor hospital, and if a donor is available, the details are sent through mail; the recipient then contacts the hospital for the further procedure. This hackathon was one of the most memorable learning experiences for our team; we had a great learning experience with new rules, such as creating dynamic routing.

What's Next

As a team, we are committed to giving our hundred percent effort to bring solutions to the problems and challenges that prevail in society.

More Details: Helping the people who are in need

Image Captioning Bot Using RNN and CNN

Last Updated on May 3, 2021

About

What does an Image Captioning Problem entail?

Suppose you see this picture –

What is the first thing that comes to your mind?

Here are a few sentences that people could come up with:

A man and a girl sit on the ground and eat.
A man and a little girl are sitting on a sidewalk near a blue bag, eating.
A man wearing a black shirt and a little girl wearing an orange dress share a treat.

A quick glance is sufficient for you to understand and describe what is happening in the picture. Automatically generating this textual description from an artificial system is the task of image captioning.

The task is straightforward – the generated output is expected to describe, in a single sentence, what is shown in the image: the objects present, their properties, the actions being performed, the interactions between the objects, and so on. But replicating this behaviour in an artificial system is a huge task, as with any other image processing problem, and hence complex and advanced techniques such as deep learning are used to solve it.

Before I go on, I want to give special thanks to Andrej Karpathy et al., who helped me understand the topic through the insightful course CS231n.

Methodology to Solve the Task

The task of image captioning can logically be divided into two modules – one is an image-based model, which extracts the features and nuances of our image, and the other is a language-based model, which translates the features and objects given by the image-based model into a natural sentence.

For our image-based model (i.e., the encoder) we usually rely on a Convolutional Neural Network. For our language-based model (i.e., the decoder) we rely on a Recurrent Neural Network. The image below summarizes the approach described above.

Usually, a pretrained CNN extracts the features from our input image. The feature vector is linearly transformed to have the same dimension as the input dimension of the RNN/LSTM network. This network is trained as a language model on our feature vector.

For training our LSTM model, we predefine our label and target text. For example, if the caption is “A man and a girl sit on the ground and eat.”, our label and target would be as follows –

Label – [ <start>, A, man, and, a, girl, sit, on, the, ground, and, eat, . ] 

Target – [ A, man, and, a, girl, sit, on, the, ground, and, eat, ., <end> ]

This is done so that our model understands the start and end of our labelled sequence.
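To make the shift concrete, here is a small illustrative sketch (not part of the tutorial code) of how such a label/target pair could be built from a tokenized caption:

# Illustrative sketch: build the shifted label/target pair for one caption.
def make_label_and_target(caption_tokens):
    seq = ["<start>"] + caption_tokens + ["<end>"]   # wrap with sequence markers
    label = seq[:-1]    # everything except <end>  -> input fed to the LSTM
    target = seq[1:]    # everything except <start> -> what it must predict
    return label, target

tokens = "A man and a girl sit on the ground and eat .".split()
label, target = make_label_and_target(tokens)
print(label)    # ['<start>', 'A', 'man', ..., 'eat', '.']
print(target)   # ['A', 'man', ..., 'eat', '.', '<end>']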

 

 

Walkthrough of Implementation

Let’s look at a simple implementation of image captioning in PyTorch. We will take an image as input and predict its description using a deep learning model.

The code for this example can be found on GitHub. The original author of this code is Yunjey Choi. Hats off to his excellent examples in PyTorch!

In this walkthrough, a pre-trained ResNet-152 model is used as the encoder, and the decoder is an LSTM network.

To run the code given in this example, you have to install the prerequisites. Make sure you have a working Python environment, preferably with Anaconda installed. Then run the following commands to install the remaining required libraries.

git clone https://github.com/pdollar/coco.git

cd coco/PythonAPI/
make
python setup.py build
python setup.py install

cd ../../

git clone https://github.com/yunjey/pytorch-tutorial.git
cd pytorch-tutorial/tutorials/03-advanced/image_captioning/

pip install -r requirements.txt

After you have set up your system, download the dataset required to train the model. Here we will be using the MS-COCO dataset. To download the dataset automatically, you can run the following commands:

chmod +x download.sh
./download.sh

Now you can go ahead and start the model-building process. First, you have to process the input:

# Search for all the possible words in the dataset and 
# build a vocabulary list
python build_vocab.py   

# resize all the images to bring them to shape 224x224
python resize.py

Now you can start training your model by running the below command:

python train.py --num_epochs 10 --learning_rate 0.01

Just to peek under the hood and check out how we defined our model, you can refer to the code written in the model.py file.

import torch
import torch.nn as nn
import torchvision.models as models
from torch.nn.utils.rnn import pack_padded_sequence


class EncoderCNN(nn.Module):
    def __init__(self, embed_size):
        """Load the pretrained ResNet-152 and replace the top fc layer."""
        super(EncoderCNN, self).__init__()
        resnet = models.resnet152(pretrained=True)
        modules = list(resnet.children())[:-1]      # delete the last fc layer.
        self.resnet = nn.Sequential(*modules)
        self.linear = nn.Linear(resnet.fc.in_features, embed_size)
        self.bn = nn.BatchNorm1d(embed_size, momentum=0.01)
        self.init_weights()

    def init_weights(self):
        """Initialize the weights."""
        self.linear.weight.data.normal_(0.0, 0.02)
        self.linear.bias.data.fill_(0)

    def forward(self, images):
        """Extract the image feature vectors."""
        with torch.no_grad():                       # keep the pretrained CNN frozen
            features = self.resnet(images)
        features = features.view(features.size(0), -1)
        features = self.bn(self.linear(features))
        return features


class DecoderRNN(nn.Module):
    def __init__(self, embed_size, hidden_size, vocab_size, num_layers):
        """Set the hyper-parameters and build the layers."""
        super(DecoderRNN, self).__init__()
        self.embed = nn.Embedding(vocab_size, embed_size)
        self.lstm = nn.LSTM(embed_size, hidden_size, num_layers, batch_first=True)
        self.linear = nn.Linear(hidden_size, vocab_size)
        self.init_weights()

    def init_weights(self):
        """Initialize weights."""
        self.embed.weight.data.uniform_(-0.1, 0.1)
        self.linear.weight.data.uniform_(-0.1, 0.1)
        self.linear.bias.data.fill_(0)

    def forward(self, features, captions, lengths):
        """Decode image feature vectors and generate captions."""
        embeddings = self.embed(captions)
        # Prepend the image feature as the first "word" of the sequence.
        embeddings = torch.cat((features.unsqueeze(1), embeddings), 1)
        packed = pack_padded_sequence(embeddings, lengths, batch_first=True)
        hiddens, _ = self.lstm(packed)
        outputs = self.linear(hiddens[0])
        return outputs

    def sample(self, features, states=None):
        """Sample captions for given image features (greedy search)."""
        sampled_ids = []
        inputs = features.unsqueeze(1)
        for i in range(20):                                # maximum sampling length
            hiddens, states = self.lstm(inputs, states)    # (batch_size, 1, hidden_size)
            outputs = self.linear(hiddens.squeeze(1))      # (batch_size, vocab_size)
            predicted = outputs.max(1)[1]                  # (batch_size,)
            sampled_ids.append(predicted)
            inputs = self.embed(predicted)
            inputs = inputs.unsqueeze(1)                   # (batch_size, 1, embed_size)
        sampled_ids = torch.stack(sampled_ids, 1)          # (batch_size, 20)
        return sampled_ids.squeeze()

Now we can test our model using:

python sample.py --image='png/example.png'

For our example image, our model gives us this output:

<start> a group of giraffes standing in a grassy area . <end>

And that’s how you build a Deep Learning model for image captioning!

 

Conclusion

The model we saw above was just the tip of the iceberg; there has been a lot of research done on this topic. A well-known production system for image captioning is Microsoft's CaptionBot. You can look at a demo of the system on its official website (link: www.captionbot.ai).

I will list a few ideas which you can use to build a better image captioning model.

More Details: Image Captioning Bot using RNN and CNN

Image Classification Using Machine Learning

Last Updated on May 3, 2021

About

This is a prototype that shows which category a given image belongs to. Any set of images can be used to learn the differences between classes; the main aim is to predict which of the considered categories a given image falls into.

In this prototype I downloaded images of three different dog breeds: Doberman, Golden Retriever, and Shih Tzu. The first step is to preprocess the data, which basically means converting each image into a numpy array; this process is called flattening the image. This numpy array is the model's input representation of the image.
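As an illustration, the flattening step could look like the sketch below. The image size (64x64) and the folder layout are assumptions made for the example, not project specifics.

# Illustrative sketch: flatten resized images into fixed-length feature rows.
import glob
import numpy as np
from skimage.io import imread
from skimage.transform import resize

def flatten_image(path, size=(64, 64)):
    """Read an image and flatten it into one fixed-length feature row."""
    img = resize(imread(path), size)   # bring every image to the same shape
    return img.flatten()               # 64*64*3 values -> one feature vector

# Hypothetical folder layout: dataset/<breed>/*.jpg, one folder per breed.
breeds = ["doberman", "golden_retriever", "shihtzu"]
X, y = [], []
for idx, breed in enumerate(breeds):
    for path in glob.glob(f"dataset/{breed}/*.jpg"):
        X.append(flatten_image(path))
        y.append(idx)
X, y = np.array(X), np.array(y)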


After preprocessing the data, the next step is to find the best-suited parameters for the machine learning algorithm. After getting the parameters, I passed them into the algorithm as arguments and fit the model. From sklearn, I imported classification_report, accuracy_score, and confusion_matrix, which give a brief understanding of the model's performance. The fitted model can be saved to a file using the pickle library.
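The write-up does not name the exact algorithm, so the sketch below assumes an SVM tuned with GridSearchCV, continuing from the X, y, and breeds defined in the previous sketch:

# Illustrative sketch (assumed classifier): tune an SVM, evaluate, and save it.
import pickle
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import classification_report, accuracy_score, confusion_matrix

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Search for the best-suited parameters, then fit the model with them.
param_grid = {"C": [1, 10, 100], "gamma": ["scale", 0.001], "kernel": ["rbf"]}
clf = GridSearchCV(SVC(), param_grid).fit(X_train, y_train)

pred = clf.predict(X_test)
print(classification_report(y_test, pred, target_names=breeds))
print("Accuracy:", accuracy_score(y_test, pred))
print(confusion_matrix(y_test, pred))

# Persist the fitted model to a file with pickle.
with open("dog_breed_model.pkl", "wb") as f:
    pickle.dump(clf, f)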


Now the last step is to predict the output. For this I took an input field that accepts a URL, which should point to an image of the dog whose breed is to be predicted. In the same way as during training, we flatten that image into a numpy array and predict the output for it. The result shows the predicted breed of the dog along with the image we are checking.
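A minimal sketch of that prediction step, reusing the pickled model and the same flattening as above (the URL is a hypothetical placeholder):

# Illustrative sketch: fetch an image by URL, flatten it, and predict the breed.
import pickle
from skimage.io import imread
from skimage.transform import resize

with open("dog_breed_model.pkl", "rb") as f:
    clf = pickle.load(f)

url = "https://example.com/some_dog.jpg"   # hypothetical input URL
img = resize(imread(url), (64, 64))        # skimage can read directly from a URL
features = img.flatten().reshape(1, -1)    # same flattening as in training
breeds = ["doberman", "golden_retriever", "shihtzu"]
print("Predicted breed:", breeds[clf.predict(features)[0]])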


The main theme of this project is to train the computer to distinguish between the different classes considered.

More Details: Image Classification using Machine learning

Learning Management System

Last Updated on May 3, 2021

About

I have made a Learning Management System website using HTML/CSS/JavaScript. It can provide tremendous benefits both for the training department and for the organisation in general. This system helps you deliver and manage training in numerous formats.

In this project, there is also an implementation of a chatbox where anyone can write and send messages.

In this project, there is an implementation of a login/registration page, where the user has to register to access various facilities of the website, such as choosing courses for their studies and viewing course content. After successful registration, users can choose courses accordingly.

Course content is also available on the site. If users have any problems, queries, or doubts related to the courses, the site, or the preparation material, they are free to ask in the chatbox: they write their questions there, and our experts review them and clear their doubts.

There is also an online test facility for different courses such as JavaScript, HTML, and CSS. By taking these online tests, users can assess their knowledge on their own, and they can improve it by taking more tests. There is also a section on the site where users can directly contact the site manager through LinkedIn, Instagram, Facebook, etc. If users want to know more about the Learning Management System website, they can go to the main dashboard, where they can learn more about it.

So, these are the features of the Learning Management System website that I have briefly introduced here.


Duration: 11 months

My role in this project: Developer

Skills used: HTML/CSS/JavaScript

More Details: Learning Management System

Machine Learning Implementation On Crop Health Monitoring System.

Last Updated on May 3, 2021

About

The objective of our study is to provide a solution for smart agriculture by monitoring the agricultural field, which can assist farmers in increasing productivity to a great extent. Weather forecast data obtained from the IMD (Indian Meteorological Department), such as temperature and rainfall, together with a repository of soil parameters, gives insight into which crops are suitable to be cultivated in a particular area. Thus, the proposed system takes the location of the user as input, from which the soil moisture is obtained. The processing part also takes into consideration two more datasets: one obtained from the weather department, forecasting the weather expected in the current year, and the other being static data. This static data comprises crop production figures and data related to the demand for various crops, obtained from various government websites. The proposed system applies machine learning and prediction algorithms like Decision Tree, Naive Bayes, and Random Forest to identify patterns in the data and then processes it as per the input conditions. This in turn proposes the best feasible crops for the given environmental conditions. Thus, the system requires only the location of the user, and it suggests a number of profitable crops, giving the farmer a direct choice about which crop to cultivate. As past years' production is also taken into account, the prediction will be more accurate.
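As a rough sketch of the modelling step described above, the snippet below compares the three named classifiers on hypothetical weather/soil features. The file crop_data.csv and the column names are illustrative stand-ins, not the project's actual data.

# Illustrative sketch: compare the three classifiers named above on
# (hypothetical) weather/soil features to recommend a crop.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Hypothetical dataset: temperature, rainfall, soil moisture -> crop label.
data = pd.read_csv("crop_data.csv")
X = data[["temperature", "rainfall", "soil_moisture"]]
y = data["crop"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

for model in (DecisionTreeClassifier(), GaussianNB(), RandomForestClassifier()):
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(type(model).__name__, "accuracy:", acc)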


More Details: MACHINE LEARNING IMPLEMENTATION ON CROP HEALTH MONITORING SYSTEM.
