Web Forum – Covid Action Platform

Last Updated on May 3, 2021


The dramatic spread of COVID-19 has disrupted lives, livelihoods, communities, and businesses.


People need to know the necessary actions and precautions, see daily reports, and get answers to their questions about COVID-19.

In response to this emergency, we developed the Covid Action Platform, a web forum.


This project aims to develop an online forum where users can hold group discussions about COVID-19.


It is a web-based tool: any user can post topics or questions and reply to questions posted by other users.


Nowadays, people face a lot of fake news, and the means by which they communicate with one another to deliberate on solutions have always been limited.


That is why there is a need for an efficient and easy way for users to connect with one another and harness the strength of teaming up to solve problems. The Covid Action Platform forum provides exactly that.


Submitted By



HackSat

Last Updated on May 3, 2021


Imagine a satellite that lets anyone stop worrying about data transfer, power, and all of those nuisances.

What it does

HackSat is a prototype CubeSat blueprint that allows anyone who wants to run experiments in outer space to stop worrying about how to send data or how to supply power, and to start thinking instead about which data will be sent and when to send it.

It is also worth noting that everything will be released under an open-source license.

How we built it

We designed the basic structure based on the CubeSat specs provided by California Polytechnic State University and used by NASA to launch low-cost satellites.

We printed the structure on a couple of 3D printers.

We handcrafted all the electronics using a combination of three Arduinos, which required us to search for low-consumption components in order to maximize battery life; we also worked on minimizing the energy consumption of the whole satellite.

We opted to use recycled components: solar panels, cables, a battery, a converter, and so on.

We worked a lot on the data-transfer part so that the Sat can sleep most of the time, in an effort to extend battery life even further.

And almost 24 hours of nonstop work and a lot of enthusiasm!

Challenges we ran into

We found the electronics the most challenging part, because our main objective was to get optimal use out of our battery and avoid draining it too fast.

Another point worth mentioning was the data transfer between the experiment section and the Sat section: we wanted to isolate each part from the other as much as possible, so that the experiment only needs to tell the Sat to send the data and nothing more.

Accomplishments that we are proud of

We are very proud to have accomplished our objective of building a viable prototype. Even though we faced some issues along the way, we managed to overcome all of them, and as a consequence we have grown wiser and broadened our vision.

What we learned

During the development of HackSat, we learned a lot about radio transmission, a great deal about serial ports, and how to exchange data between three different microcontrollers using two different protocols.

What's next for HackSat

The first improvement would be to fix some issues we encountered with the measurements in our designs, which required some on-site adjustments.

Another obvious improvement is to update the case so it is made of aluminium instead of plastic, which is currently the main blocking issue preventing HackSat from being launched.

Finally, we would move to more dedicated hardware, which would most likely allow us to further optimize the battery consumption and the overall lifespan of the Sat.

More Details: HackSat

Submitted By

Cluster AI

Last Updated on May 3, 2021


Explore a galaxy of research papers in 3D space using a state-of-the-art machine learning model.


Search engines like Google Scholar make it easy to find research papers on a specific topic. However, it can be hard to branch out from a general starting point and narrow down a topic for your research. Wouldn’t it be great to have a tool that not only recommends research papers, but does so in a way that makes it easy to explore related topics and solutions?

What it does

Users input either a text query or a research paper into Cluster AI. Cluster AI uses BERT (Bidirectional Encoder Representations from Transformers), a natural language processing model, to connect users to similar papers. It uses the CORE Research API to fetch research articles that may be relevant, then visualizes the similarity of these papers in 3D space. Each node represents a research paper, and the distances between nodes reflect the similarity between those papers. Using this, users can see clusters of closely connected research papers and quickly find resources that pertain to their topic.
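
At a high level, the core idea is: embed each abstract with BERT, then project the embeddings into 3D with MDS so that distances between points approximate paper similarity. Below is a minimal sketch of that pipeline, illustrative only; it assumes the Hugging Face transformers library and scikit-learn rather than the project's exact code.

# Minimal sketch of the Cluster AI idea (not the project's exact code):
# embed each abstract with BERT, then project the embeddings to 3D with MDS.
import torch
from transformers import BertTokenizer, BertModel   # assumed library choice
from sklearn.manifold import MDS

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

def embed(texts):
    """Return one fixed-length vector per abstract (mean-pooled BERT outputs)."""
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state      # (batch, tokens, 768)
    return hidden.mean(dim=1).numpy()                # (batch, 768)

abstracts = [
    "We study transformers for low-resource machine translation ...",
    "A deep learning approach to protein structure prediction ...",
]
vectors = embed(abstracts)

# MDS tries to preserve pairwise distances, so nearby 3D points correspond to similar papers.
coords_3d = MDS(n_components=3, random_state=0).fit_transform(vectors)
print(coords_3d.shape)   # (num_papers, 3) -> rendered as nodes in the Three.js view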

Test Cluster AI here

Note: the demo runs on a CPU-based server, so deploying your own Django server using the instructions in the source code is highly recommended. The demo may have delays depending on the query and the number of users at any given time. Queries can request 10-100 papers, but around 20 papers per query is optimal.

Check out the Source Code!

How we built it

We used a multitude of technologies, languages, and frameworks in order to build Cluster AI.

  1. BERT (Bidirectional Encoder Representations from Transformers) and MDS (Multidimensional Scaling) with PyTorch for the machine learning
  2. Python and Django for the backend
  3. JavaScript for the graph visualizations (Three.js/WebGL)
  4. Bootstrap/HTML/CSS/JavaScript for the frontend

Challenges we ran into

The CORE Research API did not always provide all the necessary information that was requested. It sometimes returned papers not in English or without abstracts. We were able to solve this problem by validating the results ourselves. Getting the HTML/CSS to do exactly what we wanted gave us trouble.
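
In practice, the validation amounts to filtering out results that are missing an abstract or are not in English. A minimal sketch of that idea follows; the field names and the use of langdetect are assumptions for illustration, not the exact CORE schema or our exact code.

# Illustrative sketch only: drop results that lack an abstract or are not in English.
# "abstract" is an assumed field name, and langdetect is just one possible language check.
from langdetect import detect

def validate_results(results):
    valid = []
    for paper in results:
        abstract = (paper.get("abstract") or "").strip()
        if not abstract:
            continue                      # skip papers without an abstract
        try:
            if detect(abstract) != "en":  # skip papers not in English
                continue
        except Exception:
            continue                      # undetectable text: skip it too
        valid.append(paper)
    return valid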

Accomplishments that we're proud of

We worked with a state-of-the-art natural language processing model which successfully condensed each paper into a 3D point.

The visualization of the graph turned out great and let us see the results of the machine learning techniques we used and the similarities between large amounts of research papers.

What we learned

We learned more about HTML, CSS, and JavaScript, since the frontend required new techniques and knowledge to accomplish what we wanted. We also learned more about the BERT model and dimensionality reduction. The semantic analysis of each paper’s abstract provided by the BERT model served as the basis for condensing each paper into a 3D point.

What's next for Cluster AI

We can add filtering to the nodes so that only nodes matching given criteria are shown. We can expand Cluster AI to visualize other corpora of text, such as books, movie scripts, or news articles. Some papers are in other languages; we would like to use a translation API to convert them into the user’s native language, so that anyone will be able to read the papers.

More Details: Cluster AI

Submitted By


HackTube

Last Updated on May 3, 2021


A Chrome extension that fights online harassment by filtering out comments with strong language.


YouTube is a place for millions of people to share their voices and engage with their communities. Unfortunately, the YouTube comments section is notorious for enabling anonymous users to post hateful and derogatory messages with the click of a button. These messages are purely meant to cause anger and depression without ever providing any constructive criticism. For YouTubers, this means seeing the degrading and mentally-harmful comments on their content, and for the YouTube community, this means reading negative and offensive comments on their favorite videos. As young adults who consume this online content, we feel as though it is necessary to have a tool that combats these comments to make YouTube a safer place.

What it does

HackTube automatically analyzes every YouTube video you watch, targeting comments which are degrading and offensive. It is constantly checking the page for hateful comments, so if the user loads more comments, the extension will pick those up. It then blocks comments which it deems damaging to the user, listing the total number of blocked comments at the top of the page. This process is all based on user preference, since the user chooses which types of comments (sexist, racist, homophobic, etc.) they do not want to see. It is important to note that the user can disable the effects of the extension at any time. HackTube is not meant to censor constructive criticism; rather, it combats comments which are purely malicious in intent.

How we built it

HackTube uses JavaScript to parse through every YouTube comment almost instantly, comparing its content against large arrays we compiled of words commonly used in hate speech. We chose our lists of words carefully to ensure that the extension would focus on injurious comments rather than helpful criticism. We used standard HTML and CSS to style the popup for the extension and the format of the censored comments.
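
The matching logic itself is simple. The sketch below illustrates the idea in Python with placeholder word lists; the real extension does this in JavaScript directly on the YouTube page, with much larger, carefully chosen lists.

# Illustration only: the actual extension is JavaScript, and the word lists and
# category names here are placeholders, not the project's real lists.
HATE_WORDS = {
    "sexist": ["placeholder_sexist_term"],
    "racist": ["placeholder_racist_term"],
    "homophobic": ["placeholder_homophobic_term"],
}

def is_hateful(comment, enabled_categories):
    """Flag a comment if it contains any word from a category the user enabled."""
    text = comment.lower()
    return any(
        word in text
        for category in enabled_categories
        for word in HATE_WORDS.get(category, [])
    )

comments = ["great video!", "you are a placeholder_sexist_term"]
flagged = [c for c in comments if is_hateful(c, ["sexist", "racist"])]
print(f"{len(flagged)} comment(s) blocked")   # count shown at the top of the page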

Challenges we ran into

We tried to use cookies to create user settings that would be remembered even after the user closes the browser. That way, anyone who uses HackTube would be able to choose exactly which types of comments they don't want to see and have those preferences remembered by the extension. Unfortunately, Chrome blocks the use of cookies unless you use a special API, and we didn't have enough time to complete our implementation of that API at this hackathon.

Accomplishments that we're proud of

We are proud of making a functional product that can not only fight online harassment and cyberbullying but also appeal to a wide variety of people.

What we learned

We learned how to dynamically alter the source code of a webpage through a Chrome extension. We also learned just how many YouTube comments are full of hate and malicious intent.

What's next for HackTube

Right now, for demo purposes, HackTube merely changes the hateful comments into a red warning statement. In the future, HackTube will have an option to fully take out the malicious comment, so users’ YouTube comments feed will be free of any trace of hateful comments. Users won’t have to worry about how many comments were flagged and what they contained. Additionally, we will have a way for users to input their own words that offend them and take the comments that contain those words out of the section.

More Details: HackTube

Submitted By

NavAssist AI

Last Updated on May 3, 2021


Incorporating machine learning and haptic feedback, NavAssistAI detects the position and state of a crosswalk light, enabling it to aid the visually impaired in daily navigation.


One day, we were browsing YouTube looking for an idea for our school's science fair. That day, we came across a blind YouTuber named Tommy Edison. He had uploaded a video of himself attempting to cross a busy intersection on his own. It was apparent that he was having difficulty, and at one point he almost ran into a street sign. After seeing his video, we decided we wanted to leverage new technology to help people like Tommy with daily navigation, so we created NavAssist AI.

What it does

In essence, NavAssist AI uses object detection to detect both the position and state of a crosswalk light (stop hand or walking person). It then processes this information and relays it to the user through haptic feedback in the form of vibration motors inside a headband. This allows the user to understand whether it is safe to cross the street or not, and in which direction they should face when crossing.

How we built it

We started out by gathering our own dataset of 200+ images of crosswalk lights, because there was no existing library of such images. We then ran through many iterations of many different models, training each on this dataset. Across the different model architectures and iterations, we strove to find a balance between accuracy and speed, and eventually discovered that an SSDLite MobileNet model from the TensorFlow model zoo had the balance we required. Using transfer learning and many more iterations, we trained a model that finally worked. We deployed it on a Raspberry Pi with a camera, soldered on a power button and vibration motors, and custom-designed a 3D-printed case with room for a battery, making our prototype a wearable device.
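
As a rough sketch of how such an on-device loop can look (illustrative only, not the project's actual code; the model file name, label ids, GPIO pin numbers, and the use of a TFLite-converted model are all assumptions):

# Rough sketch: run the crosswalk-light detector on each camera frame on the
# Raspberry Pi and drive the headband's vibration motors through GPIO.
import numpy as np
import cv2
import RPi.GPIO as GPIO
import tflite_runtime.interpreter as tflite

WALK_ID = 1                       # assumed class id for the "walking person" light
LEFT_PIN, RIGHT_PIN = 17, 27      # assumed GPIO pins for the two vibration motors

GPIO.setmode(GPIO.BCM)
GPIO.setup([LEFT_PIN, RIGHT_PIN], GPIO.OUT)

interpreter = tflite.Interpreter(model_path="crosswalk_ssdlite.tflite")  # assumed file name
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
outs = interpreter.get_output_details()

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # SSDLite MobileNet from the model zoo expects a fixed 300x300 RGB input
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    resized = cv2.resize(rgb, (300, 300))
    interpreter.set_tensor(inp["index"], np.expand_dims(resized, 0).astype(np.uint8))
    interpreter.invoke()
    boxes = interpreter.get_tensor(outs[0]["index"])[0]    # [ymin, xmin, ymax, xmax], normalized
    classes = interpreter.get_tensor(outs[1]["index"])[0]
    scores = interpreter.get_tensor(outs[2]["index"])[0]

    best = int(np.argmax(scores))
    walk = scores[best] > 0.5 and int(classes[best]) == WALK_ID
    x_center = (boxes[best][1] + boxes[best][3]) / 2       # where the light sits in the frame

    # buzz the motor on the side the light appears on, but only when it says "walk"
    GPIO.output(LEFT_PIN, GPIO.HIGH if walk and x_center < 0.5 else GPIO.LOW)
    GPIO.output(RIGHT_PIN, GPIO.HIGH if walk and x_center >= 0.5 else GPIO.LOW)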

Challenges we ran into

When we started this project, we knew nothing about machine learning or TensorFlow and had to start from scratch. However, with some googling and experimentation, we were able to figure out how to use TensorFlow for our project with relative ease. Another challenge was collecting, preparing, and labelling our dataset of 200+ images. Our most important challenge, though, was not knowing what it is like to be visually impaired. To overcome this, we had to reach out to people in the blind community and talk to them so that we could properly understand the problem and create a good solution.

Accomplishments that we're proud of

  • Making our first working model that could tell the difference between stop and go
  • Getting the haptic feedback implementation to work with the Raspberry Pi
  • When we first tested the device and successfully crossed the street
  • When we presented our work at TensorFlow World 2019

All of these milestones made us very proud because we are progressing towards something that could really help people in the world.

What we learned

Throughout the development of this project, we learned so much. Going into it, we had no idea what we were doing. Along the way, we learned about neural networks, machine learning, computer vision, as well as practical skills such as soldering and 3D CAD. Most of all, we learned that through perseverance and determination, you can make progress towards helping to solve problems in the world, even if you don't initially think you have the resources or knowledge.

What's next for NavAssistAI

We hope to expand its ability for detecting objects. For example, we would like to add detection for things such as obstacles so that it may aid in more than just crossing the street. We are also working to make the wearable device smaller and more portable, as our first prototype can be somewhat burdensome. In the future, we hope to eventually reach a point where it will be marketable, and we can start helping people everywhere.

More Details: NavAssist AI

Submitted By

Image Super Resolution Using Autoencoders In Keras

Last Updated on May 3, 2021


In this project, we will explore the basic functionality of autoencoders and use them for an image super-resolution task: given a low-resolution image, the model learns to reconstruct a sharper version. Since this does not require labels, it is a good example of solving a real-world problem with unlabelled image data.

We will be working with the ‘Labeled Faces in the Wild Home’ dataset, a database of labelled faces generally used for face recognition and detection. However, our aim is not to detect faces but to build a model that enhances image resolution. The dataset comprises multiple subdirectories, each containing several images of one person, so it is important to capture the image paths from these directories.

Load and Preprocess Images

The original images are 250 x 250 pixels. However, it would take a lot of computation power to process images of this size on a normal computer, so we reduce all images to 80 x 80 pixels.

As there are around 13,000 images, processing them one by one would take a long time, so we take advantage of Python's multiprocessing library for ease of execution.
tqdm is a progress library that we use to display a progress bar for the work done.
from tqdm import tqdm
from multiprocessing import Pool
from tensorflow.keras.preprocessing import image

# face_images is the list of image paths collected from the dataset directories
progress = tqdm(total=len(face_images), position=0)

def read(path):
  img = image.load_img(path, target_size=(80, 80, 3))  # resize to 80 x 80
  img = image.img_to_array(img)
  img = img / 255.                                      # scale pixel values to [0, 1]
  return img

p = Pool(10)
img_array = p.map(read, face_images)

To save time later, let’s store img_array (which contains the processed images) with the help of the pickle library:

import pickle

with open('img_array.pickle', 'wb') as f:
  pickle.dump(img_array, f)


Data preparation for Model Training

Now we will split our dataset into training and validation sets. The training data will be used to train our model, and the validation data will be used to evaluate it.

import numpy as np
from sklearn.model_selection import train_test_split

all_images = np.array(img_array)

# split into train and validation sets; all_images will be our output (target) images
train_x, val_x = train_test_split(all_images, random_state=32, test_size=0.2)

As this is an image-resolution enhancement task, we will distort the images and use them as the input; the original images will serve as the output (target) images.

import cv2

# make input images by lowering the resolution without changing the size
def pixalate_image(image, scale_percent=40):
  width = int(image.shape[1] * scale_percent / 100)
  height = int(image.shape[0] * scale_percent / 100)
  dim = (width, height)
  small_image = cv2.resize(image, dim, interpolation=cv2.INTER_AREA)

  # scale back to the original size
  width = int(small_image.shape[1] * 100 / scale_percent)
  height = int(small_image.shape[0] * 100 / scale_percent)
  dim = (width, height)
  low_res_image = cv2.resize(small_image, dim, interpolation=cv2.INTER_AREA)
  return low_res_image
The idea is to feed these distorted images to our model and have the model learn to recover the original images.

train_x_px = []
for i in range(train_x.shape[0]):
  temp = pixalate_image(train_x[i,:,:,:])
  train_x_px.append(temp)
train_x_px = np.array(train_x_px)   # distorted (low-resolution) training images

# get low-resolution images for the validation set
val_x_px = []
for i in range(val_x.shape[0]):
  temp = pixalate_image(val_x[i,:,:,:])
  val_x_px.append(temp)
val_x_px = np.array(val_x_px)       # distorted (low-resolution) validation images

[Figure: an example distorted input image alongside the corresponding original image]

Model building

Let's define the structure of the model. To reduce the possibility of over-fitting, we use L1 regularization on the convolutional layers.

from tensorflow.keras.layers import Input, Conv2D, MaxPool2D, UpSampling2D, Add
from tensorflow.keras.models import Model
from tensorflow.keras import regularizers

Input_img = Input(shape=(80, 80, 3))

# encoding architecture
x1 = Conv2D(64, (3, 3), activation='relu', padding='same', kernel_regularizer=regularizers.l1(10e-10))(Input_img)
x2 = Conv2D(64, (3, 3), activation='relu', padding='same', kernel_regularizer=regularizers.l1(10e-10))(x1)
x3 = MaxPool2D(padding='same')(x2)
x4 = Conv2D(128, (3, 3), activation='relu', padding='same', kernel_regularizer=regularizers.l1(10e-10))(x3)
x5 = Conv2D(128, (3, 3), activation='relu', padding='same', kernel_regularizer=regularizers.l1(10e-10))(x4)
x6 = MaxPool2D(padding='same')(x5)
encoded = Conv2D(256, (3, 3), activation='relu', padding='same', kernel_regularizer=regularizers.l1(10e-10))(x6)

# decoding architecture (with skip connections from the encoder)
x7 = UpSampling2D()(encoded)
x8 = Conv2D(128, (3, 3), activation='relu', padding='same', kernel_regularizer=regularizers.l1(10e-10))(x7)
x9 = Conv2D(128, (3, 3), activation='relu', padding='same', kernel_regularizer=regularizers.l1(10e-10))(x8)
x10 = Add()([x5, x9])
x11 = UpSampling2D()(x10)
x12 = Conv2D(64, (3, 3), activation='relu', padding='same', kernel_regularizer=regularizers.l1(10e-10))(x11)
x13 = Conv2D(64, (3, 3), activation='relu', padding='same', kernel_regularizer=regularizers.l1(10e-10))(x12)
x14 = Add()([x2, x13])
decoded = Conv2D(3, (3, 3), padding='same', activation='relu', kernel_regularizer=regularizers.l1(10e-10))(x14)

autoencoder = Model(Input_img, decoded)
autoencoder.compile(optimizer='adam', loss='mse', metrics=['accuracy'])

You can modify this model to suit your needs and get better results: change the number of layers, the number of units, or the regularization techniques. For the time being, let’s move forward and see what our model looks like!


[Screenshot of the model summary]

Model Training

We will first define some callbacks so that model checkpointing and evaluation are easier later on.

from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint

early_stopper = EarlyStopping(monitor='val_loss', min_delta=0.01, patience=50, verbose=1, mode='min')

model_checkpoint = ModelCheckpoint('superResolution_checkpoint3.h5', save_best_only=True)

Let's train our model:

history = autoencoder.fit(train_x_px, train_x,
            epochs=500,   # upper bound on epochs (value assumed); EarlyStopping ends training earlier
            validation_data=(val_x_px, val_x),
            callbacks=[early_stopper, model_checkpoint])

The execution time was around 21 seconds per epoch on a 12 GB NVIDIA Tesla K80 GPU. Early stopping kicked in at the 65th epoch.

Now, let's evaluate our model on the validation set:

results = autoencoder.evaluate(val_x_px, val_x)
print('val_loss, val_accuracy', results)
# val_loss, val_accuracy [0.002111854264512658, 0.9279356002807617]

We are getting some pretty good results from our model with around 93% validation accuracy and a validation loss of 0.0021.

Make Predictions

import matplotlib.pyplot as plt

predictions = autoencoder.predict(val_x_px)

n = 4
plt.figure(figsize=(20, 10))
for i in range(n):
  ax = plt.subplot(2, n, i + 1)        # 1st row: low-resolution input images
  plt.imshow(val_x_px[i])
  ax = plt.subplot(2, n, i + 1 + n)    # 2nd row: model output images
  plt.imshow(predictions[i])
plt.show()

1st row — Input Images & 2nd row — Output Images

In this project, we learned about the basic functionality of autoencoders and implemented an image super-resolution enhancement task. This technique has many everyday use cases; for example, it can also be used to enhance the quality of low-resolution videos. So, even without labels, we can work with image data and solve several real-world problems.

More Details: Image Super Resolution using Autoencoders in keras

Submitted By