Web-Based Heart Failure Prediction System

Last Updated on May 3, 2021

About

Approximately 17 million people die globally each year from cardiovascular disease, mainly presenting as myocardial infarction and heart failure. Heart failure (HF) occurs when the heart cannot pump enough blood to meet the needs of the body.

In this heart failure prediction problem, we try to predict whether a patient's heart muscle pumps blood properly using Logistic Regression. In this project, the dataset is downloaded from the UCI repository, and it is real data: it was collected at the Faisalabad Institute of Cardiology and Allied Hospital in Faisalabad, Pakistan, in 2015, and contains 299 patient records with 12 features (attributes) and one label. Based on those 12 features, we predict whether the patient's heart is working properly or not.

We analyze this dataset of 299 patients with heart failure collected in 2015. We apply several machine learning classifiers to both predict patient survival and rank the features corresponding to the most important risk factors. We also perform an alternative feature ranking analysis using traditional biostatistics tests and compare these results with those provided by the machine learning algorithms. Since both feature ranking approaches clearly identify serum creatinine and ejection fraction as the two most relevant features, we then build the machine learning survival prediction models on these two factors alone.

For model building we use library packages such as Pandas, Scikit-learn (sklearn), Matplotlib, Seaborn, TensorFlow, and Keras.

The first step is data description: carrying out an initial analysis of the data to understand its source, volume, attributes, and relationships. Once these details are documented, any shortcomings noted should be reported to the relevant personnel. After that, we apply data cleaning to check whether there are any missing values, and we split the dataset into training and testing sets using a 70%/30% criterion.

The next step is model building, also known as training the model on the data and features from our dataset. A combination of data (features) and machine learning algorithms gives us a model that tries to generalize from the training data and produce results in the form of insights and/or predictions. Generally, several algorithms are tried on the same data to solve the same problem, in order to find the model whose outputs come closest to the business success criteria. Key things to keep track of here are the models created, the model parameters used, and their results.

The last step is to analyze the result: we check the model's accuracy using a confusion matrix and the model score. For this model, we got 80% accuracy, which we will try to improve in the future. For model deployment we use Python Flask, on top of which we build the web-based application.
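As a rough sketch of this pipeline (the CSV file name and the DEATH_EVENT label column are assumptions based on the standard UCI heart failure clinical records dataset, not taken from the project's code):

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix

# Load the dataset and separate the 12 features from the label.
df = pd.read_csv("heart_failure_clinical_records_dataset.csv")
X = df.drop(columns=["DEATH_EVENT"])
y = df["DEATH_EVENT"]

# 70% / 30% train-test split, as described above.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=42, stratify=y)

# Scale the features so the logistic regression solver converges cleanly.
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Train the logistic regression model.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Analyze the result with a confusion matrix and the model score.
y_pred = model.predict(X_test)
print("Confusion matrix:\n", confusion_matrix(y_test, y_pred))
print("Accuracy:", accuracy_score(y_test, y_pred))

For deployment, the fitted model and scaler can be pickled and loaded inside a Flask route that calls model.predict on the feature values submitted through the web form.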


More Details: Web-Based Heart Failure Prediction System




Govindasamy Bala - IISc Bangalore | A Significant Step in Understanding Climate Change

Last Updated on May 3, 2021

About

We are currently living in what is considered the Anthropocene, an age in which human activities have a significant impact on the planet. Perhaps the most serious damage inflicted by humans has been on the planet's climate. Climate change, driven by an increase in the average surface temperature of Earth, results from a surge in radiative forcing: the difference between the energy received by Earth and the energy radiated back to space. The radiative forcing agents of the industrial era include greenhouse gases such as carbon dioxide (CO2) and methane, which trap the longwave radiation emitted by our planet. To measure how effective a forcing agent is in causing Earth's climate to change, researchers use the concept of efficacy, defined as the ratio of the global temperature change due to that particular forcing agent to the temperature change caused by CO2 for the same radiative forcing value.
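In symbols (the notation here is ours, for illustration, not the study's), efficacy compares temperature responses at equal radiative forcing F:

E_{\mathrm{agent}} = \frac{\Delta T_{\mathrm{agent}}}{\Delta T_{\mathrm{CO_2}}} \quad \text{at } F_{\mathrm{agent}} = F_{\mathrm{CO_2}}

An efficacy of 1 means the agent changes the global temperature exactly as much as CO2 does for the same forcing; values below 1 mean it is less effective.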

In a new study, Govindasamy Bala, one of India's most well-known climate scientists, and his student Angshuman Modak addressed the question of the efficacy of incident solar radiation relative to CO2. Using a modelling approach, they considered climate system responses over three different time periods: a week, four months, and a century.

“What we found was that the Sun is less effective than CO2 in causing climate change,” says Bala. In fact, the study shows that solar forcing is only 80% as effective as CO2 forcing. “This means that for the same radiative forcing, if CO2 causes 1 °C warming, the Sun causes only 0.8 °C warming,” he continues. This finding, Bala argues, is crucial not just for our understanding of the mechanisms of climate change but also for the formulation of more effective climate change policies. 

More Details: Govindasamy Bala - IISc Bangalore | A Significant Step in Understanding Climate Change



Image Processing

Last Updated on May 3, 2021

About

What is image processing?

The aim of pre-processing is to improve the image data by suppressing unwanted distortions or enhancing image features that are important for further processing. Geometric transformations of images (e.g. rotation, scaling, translation) are also classified as pre-processing methods here, since similar techniques are used.

Preprocessing refers to all the transformations applied to the raw data before it is fed to a machine learning or deep learning algorithm. For instance, training a convolutional neural network on raw images will probably lead to poor classification performance.
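As a minimal illustration of such preprocessing (the file path and target size below are assumptions, and rescaling is standard practice rather than anything specific to this write-up):

import numpy as np
from tensorflow.keras.preprocessing.image import load_img, img_to_array

def preprocess(path, target_size=(64, 64)):
    img = load_img(path, target_size=target_size)  # resize on load
    arr = img_to_array(img)                        # height x width x channels, values 0-255
    return arr / 255.0                             # rescale pixels to [0, 1]

# Hypothetical test image; shape becomes (1, 64, 64, 3), ready for a CNN.
batch = np.stack([preprocess("dataset/single_prediction/cat_or_dog_1.jpg")])
print(batch.shape)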


Convolutional neural network

A convolutional neural network (CNN) is a type of artificial neural network used in image recognition and processing that is specifically designed to process pixel data. A neural network is a system of hardware and/or software patterned after the operation of neurons in the human brain.


CNNs are used for image classification and recognition because of their high accuracy. A CNN follows a hierarchical model that builds up the network like a funnel and ends in a fully-connected layer, where all the neurons are connected to each other and the output is processed.


Problem description: a case study. We have a dataset with 3 subfolders: single_prediction, containing only 2 images used to test the trained model's predictions so that we know our CNN is working; test_set, with 2000 images (1000 of dogs and 1000 of cats), on which we evaluate the model; and training_set, containing 8000 images (4000 of cats and 4000 of dogs), on which we train our CNN. The CNN model then predicts whether a given image is of a cat or a dog; we pick a test image at random (e.g. by generating a random number) and run the prediction. E.g.: cat.
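A minimal Keras sketch of such a CNN, assuming the folder layout described above (the exact directory names, image size, label order, and epoch count are assumptions):

import tensorflow as tf
from tensorflow.keras import layers, models

# Funnel-shaped network: convolution + pooling blocks, then a
# fully-connected head with a sigmoid output for cat vs. dog.
model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # 1 = dog, 0 = cat (assumed label order)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Stream the 8000 training and 2000 test images from disk.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/training_set", image_size=(64, 64), batch_size=32, label_mode="binary")
test_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/test_set", image_size=(64, 64), batch_size=32, label_mode="binary")

# Rescale pixels to [0, 1] (see the preprocessing note above), then train.
rescale = lambda x, y: (x / 255.0, y)
model.fit(train_ds.map(rescale), validation_data=test_ds.map(rescale), epochs=10)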


Prediction for CAT

Prediction for DOG

More Details: Image Processing



Go Alexa Go

Last Updated on May 3, 2021

About

GO ALEXA GO was created with millennials in mind. Why drive and text when you can just text and have Alexa drive?

Inspiration

We were inspired by Millennials' dangerous texting and driving habits, so we developed a driving system to allow them to text and still drive at the same time.

What it does

Our HTC Vive virtual reality experience allows the user to issue commands to our taxi driver, Alexa, and explore Sponsorville.

How we built it

We built our HTC Vive VR experience in Unity using C#, and our Amazon backend with Node.js and the Alexa Skills Kit. Alexa takes the user's directional voice command through Amazon's browser-based web services built with Node.js, and notifies Unity of the user's input via a web API hosted on Microsoft Azure.
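Purely as an illustration of the relay pattern just described (the team's actual service was Node.js on Azure; the endpoint names and payload format here are hypothetical, with Python/Flask standing in):

from flask import Flask, request, jsonify

app = Flask(__name__)
latest_command = {"direction": None}  # most recent Alexa command

@app.route("/command", methods=["POST"])
def set_command():
    # Called by the Alexa skill backend after parsing the voice intent.
    latest_command["direction"] = request.get_json().get("direction")
    return jsonify(ok=True)

@app.route("/command", methods=["GET"])
def get_command():
    # Polled by the Unity client to steer the taxi.
    return jsonify(latest_command)

if __name__ == "__main__":
    app.run()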

Challenges we ran into

The first challenge we ran into was the division of work. Charlie became our Unity/C#/HTC-Vive programmer, Randy became our impromptu Scrum Master/Front-End Designer/3D-modeler, and Caleb and Colin worked on Node.js/Azure-IoT/Amazon Web Services. After we had a better sense of everyone's skill set and strengths, we were able to snowball each other consistently throughout the course of the hackathon. Regarding Unity and C#, we ran into rigidbody and trigger debugging issues early on. With Alexa, we had trouble getting the browser-based web service to work with Node.js/Azure, but by the middle of the second day we were able to create a working prototype.

Accomplishments that we're proud of

Getting an Amazon Alexa to take voice commands and convert them to directional output in a Unity VR environment.

What we learned

Make sure you go into a hackathon with the division of work agreed between your teammates. Additionally, make sure your teammates actually have a solid background in coding the work that is handed to them. Get together with your teammates every few hours, AGILE style, and see what progress has been made and whether anyone needs help. Make sure everyone on your team can at some point handle paperwork, because there will be a good amount of it throughout the hackathon, from the gathering of your teammates to the final 12 hours before showtime. There needs to be a HUGE sense of trust between you and your teammates. Without some form of solid workflow (we used 2-hour scrums), you can run into problems like people going off and coding who-knows-what for 3-4 hours of your hackathon before you realize you have issues.

What's next for Go Alexa Go

We plan on buying our own private islands and moving there with our solid-gold rocket ships from the amount of sponsorship money we've made from our amazing SponsorVille sponsors at Spartahack 2017.

More Details: Go Alexa Go




Air Quality Analysis and Prediction of an Italian City

Last Updated on May 3, 2021

About

Problem statement

  • Predict: the value of CO in mg/m^3 (reference value) with respect to the available data. Make assumptions if you need to, but do specify them.
  • Forecast: the value of CO in mg/m^3 for the next 3 weeks of hourly averaged concentrations (see the sketch after the attribute list below).

Data Set Information

The data come from an air quality chemical multisensor device located on the field in a significantly polluted area, at road level, within an Italian city. Data were recorded from March 2004 to February 2005 (one year), representing the longest freely available recordings of field-deployed air quality chemical sensor device responses. Ground truth hourly averaged concentrations for CO, Non-Methane Hydrocarbons (NMHC), Benzene, Total Nitrogen Oxides (NOx), and Nitrogen Dioxide (NO2) were provided by a co-located reference certified analyzer. Evidence of cross-sensitivities as well as both concept and sensor drifts is present, as described in De Vito et al., Sens. and Act. B, Vol. 129(2), 2008 (citation required), eventually affecting the sensors' concentration estimation capabilities.

Data collection

0 Date (DD/MM/YYYY)

1 Time (HH.MM.SS)

2 True hourly averaged concentration CO in mg/m^3 (reference analyzer)

3 PT08.S1 (tin oxide) hourly averaged sensor response (nominally CO targeted)

4 True hourly averaged overall Non-Methane Hydrocarbons (NMHC) concentration in microg/m^3 (reference analyzer)

5 True hourly averaged Benzene concentration in microg/m^3 (reference analyzer)

6 PT08.S2 (titania) hourly averaged sensor response (nominally NMHC targeted)

7 True hourly averaged NOx concentration in ppb (reference analyzer)

8 PT08.S3 (tungsten oxide) hourly averaged sensor response (nominally NOx targeted)

9 True hourly averaged NO2 concentration in microg/m^3 (reference analyzer)

10 PT08.S4 (tungsten oxide) hourly averaged sensor response (nominally NO2 targeted)

11 PT08.S5 (indium oxide) hourly averaged sensor response (nominally O3 targeted)

12 Temperature in °C

13 Relative Humidity (%)

14 Absolute Humidity (AH)
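A minimal sketch of loading the data and fitting a CO regressor (the file name, separators, and the -200 missing-value marker follow the standard UCI distribution of this dataset; the model choice and feature list are our assumptions):

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# The UCI file is semicolon-separated with decimal commas; -200 marks missing readings.
df = pd.read_csv("AirQualityUCI.csv", sep=";", decimal=",")
df = df.replace(-200, np.nan)
df = df.dropna(subset=["CO(GT)"])  # keep rows with a CO ground truth

features = ["PT08.S1(CO)", "PT08.S2(NMHC)", "PT08.S3(NOx)",
            "PT08.S4(NO2)", "PT08.S5(O3)", "T", "RH", "AH"]
X = df[features].apply(pd.to_numeric, errors="coerce")
X = X.fillna(X.median())
y = df["CO(GT)"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
print("MAE (mg/m^3):", mean_absolute_error(y_test, model.predict(X_test)))

For the 3-week hourly forecast, a time-ordered split with lagged CO values as features would replace the random split above.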

More Details: Air Quality Analysis and Prediction of an Italian City
