Interact With Quantum Computing Hardware Devices Using Amazon Braket
Last Updated on May 3, 2021
The Amazon Braket Python SDK is an open source library that provides a framework that you can use to interact with quantum computing hardware devices through Amazon Braket.
Before you begin working with the Amazon Braket SDK, make sure that you've installed or configured the following prerequisites.
Python 3.7.2 or greater
Download and install Python 3.7.2 or greater from Python.org.
Install Git from https://git-scm.com/downloads. Installation instructions are provided on the download page.
IAM user or role with required permissions
As a managed service, Amazon Braket performs operations on your behalf on the AWS hardware that is managed by Amazon Braket. Amazon Braket can perform only operations that the user permits. You can read more about which permissions are necessary in the AWS Documentation.
The Braket Python SDK should not require any additional permissions aside from what is required for using Braket. However, if you are using an IAM role with a path in it, you should grant permission for iam:GetRole.
To learn more about IAM user, roles, and policies, see Adding and Removing IAM Identity Permissions.
Boto3 and setting up AWS credentials
Follow the installation instructions for Boto3 and setting up AWS credentials.
Note: Make sure that your AWS region is set to one supported by Amazon Braket. You can check this in your AWS configuration file, which is located by default at ~/.aws/config.
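For example, the region entry in ~/.aws/config might look like the following (us-east-1 is shown as an assumed example; confirm the current list of supported Regions in the Amazon Braket documentation):

```ini
[default]
region = us-east-1
```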
Configure your AWS account with the resources necessary for Amazon Braket
If you are new to Amazon Braket, onboard to the service and create the resources necessary to use Amazon Braket using the AWS console.
Installing the Amazon Braket Python SDK
The Amazon Braket Python SDK can be installed with pip as follows:
pip install amazon-braket-sdk
You can also install from source by cloning this repository and running a pip install command in the root directory of the repository:
git clone https://github.com/aws/amazon-braket-sdk-python.git
cd amazon-braket-sdk-python
pip install .
Check the version you have installed
You can view the version of the amazon-braket-sdk you have installed by using the following command:
pip show amazon-braket-sdk
You can also check your version of amazon-braket-sdk from within Python:
>>> import braket._sdk as braket_sdk
>>> braket_sdk.__version__
Running a circuit on an AWS simulator
import boto3
from braket.aws import AwsDevice
from braket.circuits import Circuit

device = AwsDevice("arn:aws:braket:::device/quantum-simulator/amazon/sv1")
# Use the S3 bucket you created during onboarding
s3_folder = ("amazon-braket-Your-Bucket-Name", "folder-name")

bell = Circuit().h(0).cnot(0, 1)
task = device.run(bell, s3_folder, shots=100)
print(task.result().measurement_counts)
The code sample imports the Amazon Braket framework, then defines the device to use (the SV1 AWS simulator). The s3_folder statement defines the Amazon S3 bucket for the task result and the folder within the bucket to store it; this folder is created when you run the task. The sample then creates a Bell pair circuit, executes it on the simulator, and prints the results of the job.
Retail Analysis Of Walmart Data
Last Updated on May 3, 2021
One of the leading retail chains in the US, Walmart would like to predict sales and demand accurately. Certain events and holidays impact sales on each day. Sales data are available for 45 Walmart stores. The business is facing a challenge due to unforeseen demand and sometimes runs out of stock because of an inappropriate machine learning algorithm. An ideal ML algorithm would predict demand accurately and ingest factors like economic conditions, including CPI, the Unemployment Index, etc.
Walmart runs several promotional markdown events throughout the year. These markdowns precede prominent holidays, the four largest of which are the Super Bowl, Labour Day, Thanksgiving, and Christmas. The weeks including these holidays are weighted five times higher in the evaluation than non-holiday weeks. Part of the challenge presented by this competition is modeling the effects of markdowns on these holiday weeks in the absence of complete/ideal historical data. Historical sales data for 45 Walmart stores located in different regions are available.
This is the historical data which covers sales from 2010-02-05 to 2012-11-01, in the file Walmart_Store_sales. Within this file you will find the following fields:
- Store - the store number
- Date - the week of sales
- Weekly_Sales - sales for the given store
- Holiday_Flag - whether the week is a special holiday week (1 = holiday week, 0 = non-holiday week)
- Temperature - Temperature on the day of sale
- Fuel_Price - Cost of fuel in the region
- CPI – Prevailing consumer price index
- Unemployment - Prevailing unemployment rate
Super Bowl: 12-Feb-10, 11-Feb-11, 10-Feb-12, 8-Feb-13
Labour Day: 10-Sep-10, 9-Sep-11, 7-Sep-12, 6-Sep-13
Thanksgiving: 26-Nov-10, 25-Nov-11, 23-Nov-12, 29-Nov-13
Christmas: 31-Dec-10, 30-Dec-11, 28-Dec-12, 27-Dec-13
Basic Statistics tasks
- Which store has maximum sales
- Which store has the maximum standard deviation, i.e., whose sales vary the most? Also, find the coefficient of variation (standard deviation divided by mean)
- Which store(s) had a good quarterly growth rate in Q3 2012
- Some holidays have a negative impact on sales. Find out which holidays have higher sales than the mean sales in the non-holiday season across all stores together
- Provide monthly and semester (half-yearly) views of sales in units and give insights
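As a sketch of the first two tasks with pandas, using a tiny made-up frame in place of Walmart_Store_sales (the column names follow the field list above; the real data has 45 stores):

```python
import pandas as pd

# Hypothetical stand-in for Walmart_Store_sales
df = pd.DataFrame({
    "Store":        [1, 1, 1, 2, 2, 2],
    "Weekly_Sales": [100.0, 120.0, 110.0, 300.0, 150.0, 450.0],
})

# Store with maximum total sales
totals = df.groupby("Store")["Weekly_Sales"].sum()
print("Max sales:", totals.idxmax())            # Max sales: 2

# Store with maximum standard deviation, and its coefficient of variation
stats = df.groupby("Store")["Weekly_Sales"].agg(["mean", "std"])
stats["cv"] = stats["std"] / stats["mean"]
print("Most variable:", stats["std"].idxmax())  # Most variable: 2
print("CV:", stats.loc[stats["std"].idxmax(), "cv"])  # CV: 0.5
```

The real tasks would run the same groupby logic over the full file after parsing the Date column.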
For Store 1 – Build prediction models to forecast demand
- Linear Regression – Utilize variables like date and restructure dates as 1 for 5 Feb 2010 (starting from the earliest date in order). Hypothesize if CPI, unemployment, and fuel price have any impact on sales.
- Change dates into days by creating a new variable.
Select the model which gives the best accuracy.
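The date-restructuring step can be sketched as follows; the frame below is a made-up four-week slice, not the real Store 1 data, and the column names are assumed to match the field list above:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical weekly slice of Store 1 data
df = pd.DataFrame({
    "Date": pd.to_datetime(["2010-02-05", "2010-02-12",
                            "2010-02-19", "2010-02-26"]),
    "Weekly_Sales": [1000.0, 1010.0, 1020.0, 1030.0],
    "CPI": [211.0, 211.2, 211.3, 211.4],
    "Unemployment": [8.1, 8.1, 8.0, 8.0],
    "Fuel_Price": [2.57, 2.55, 2.51, 2.56],
})

# Restructure dates: 1 for the earliest date, 2 for the next, and so on
df = df.sort_values("Date").reset_index(drop=True)
df["Day_Index"] = df.index + 1

# Regress sales on the date index plus the economic variables
X = df[["Day_Index", "CPI", "Unemployment", "Fuel_Price"]]
y = df["Weekly_Sales"]
model = LinearRegression().fit(X, y)
print(model.score(X, y))  # R^2 on this toy slice
```

On the real series, the fitted coefficients on CPI, Unemployment, and Fuel_Price indicate whether those variables carry any signal beyond the time trend.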
Voice Of The Day
Last Updated on May 3, 2021
The format used to work well on the radio, so we wanted to recreate those memories on Alexa.
What it does
Players can listen to up to three voice clips of well-known people and/or celebrities talking every day, as well as view a blurred image of the celebrity. After each clip, Alexa asks you who you think is talking, and you must try to answer correctly; this earns you a score for the monthly and all-time leader boards. The player can ask Alexa for hints, or to skip a voice clip and move on to the next one. Scores are awarded depending on how many incorrect answers the user gave for that voice and whether they used a hint. Users can also ask to hear yesterday’s answers, in case they couldn’t get them on the day.
How I built it
To create the structure of the skill, we used the Alexa Skills Kit CLI.
We used Amazon S3 storage for all our in-game assets, such as the audio clips and images.
We used the Alexa Presentation Language to create the visual interface for the skill.
We used the Amazon GameOn SDK to create monthly and all-time leader boards for all users to enter without any sign up.
Every day, free users are given the ‘easy’ clip to answer. The set of clips each day is selected dependent on the day of the year. Users who have purchased premium gain access to the ‘medium’ and ‘hard’ clips every day, as well as being able to ask for hints for the voices, skip a voice if they are stuck, and enter scores onto the leader boards.
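The day-of-year selection described above might look roughly like this; the clip names and rotation scheme are illustrative assumptions, not the skill's actual code:

```python
from datetime import date

# Hypothetical pool of daily clip sets, one entry per difficulty
CLIP_SETS = [
    {"easy": "clip_a_easy", "medium": "clip_a_medium", "hard": "clip_a_hard"},
    {"easy": "clip_b_easy", "medium": "clip_b_medium", "hard": "clip_b_hard"},
    {"easy": "clip_c_easy", "medium": "clip_c_medium", "hard": "clip_c_hard"},
]

def clips_for(day: date, is_premium: bool) -> list:
    """Pick today's clip set by day of year; free users get only the easy clip."""
    day_of_year = day.timetuple().tm_yday
    clip_set = CLIP_SETS[day_of_year % len(CLIP_SETS)]
    if is_premium:
        return [clip_set["easy"], clip_set["medium"], clip_set["hard"]]
    return [clip_set["easy"]]

print(clips_for(date(2021, 1, 2), is_premium=False))  # ['clip_c_easy']
```

Because the index depends only on the date, every user hears the same set on a given day without any server-side coordination.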
Accomplishments that I’m proud of
As well as creating a high-quality voice-only experience, we developed the skill to be very APL focused, which we are very proud of. The visual assets we used for the project were very high quality and we were able to dynamically display the appropriate data for each screen within the skill. The content loaded depends on who is talking, as well as the difficulty of the voice that the user is answering. APL also allowed us to blur and unblur the celebrity images, as opposed to having separate blurred and unblurred images for each person.
We were also very pleased with how we implemented the GameOn SDK into the skill. When the user submits a score, a random avatar is created for them, and their score is submitted under this avatar. This avoids any sign up to use the leader boards, allowing all users to use them easily.
The GameOn SDK also allows us to create monthly and all-time competitions/leader boards into which all users are automatically entered.
What I learned
I have learnt how to develop with APL, as well as better practices for structuring it more efficiently. For example, there are many APL views in the project, all of which are almost identical. What I have learnt, and what would be more effective in future projects, is to condense these down into one primary view used for each screen, populated with the appropriate data.
I have also been able to hone prompts to the user for upsells and showing the leader boards. Testing has shown that constant prompts on each play for these things can become tedious to the user, and so we have reduced the frequency of these for a much better user experience.
Credit Card Fraud Detection
Last Updated on May 3, 2021
In this project I build machine learning models to label anonymized credit card transactions as fraudulent or genuine, using a dataset from Kaggle. I also make several data visualizations to reveal patterns and structure in the data.
The dataset, hosted on Kaggle, includes credit card transactions made by cardholders. It contains 7983 transactions, of which 17 (0.21%) are fraudulent. Each transaction has 30 features, all of which are numerical. The features V1, V2, ..., V28 are the result of a PCA transformation; to protect confidentiality, background information on these features is not available. The Time feature contains the time elapsed since the first transaction, and the Amount feature contains the transaction amount. The response variable, Class, is 1 in the case of fraud and 0 otherwise.
Project Introduction
The approach for the project is:
- Randomly split the dataset into train, validation, and test sets.
- Do feature engineering.
- Train on the train set, then predict and evaluate on the validation set.
- Try other models, compare the predictions, and choose the best model.
- Predict on the test set to report the final result.
I was able to accurately identify fraudulent transactions using a LogisticRegression model. Features V1, V2, ..., V28 are the principal components obtained with PCA; the only features not transformed with PCA are 'Time' and 'Amount'. The 'Time' feature contains the seconds elapsed between each transaction and the first transaction in the dataset. 'Class' is the target variable, with value 1 in case of fraud and 0 otherwise. To improve a particular model, I optimized hyperparameters via a grid search with 3-fold cross-validation.
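A minimal sketch of that workflow, using synthetic imbalanced data in place of the Kaggle file (the class balance and parameter grid here are illustrative assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for the credit card data: heavily imbalanced classes
X, y = make_classification(n_samples=2000, n_features=30,
                           weights=[0.99, 0.01], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# Hyperparameter search with 3-fold cross-validation, as described above
grid = GridSearchCV(LogisticRegression(max_iter=1000),
                    param_grid={"C": [0.01, 0.1, 1, 10]},
                    cv=3, scoring="recall")
grid.fit(X_train, y_train)
print(grid.best_params_, grid.score(X_test, y_test))
```

Scoring on recall reflects the goal of catching as many fraudulent transactions as possible; other metrics such as precision or AUC are reasonable alternatives on data this imbalanced.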
Salary Predictor
Last Updated on May 3, 2021
This is a web app created using the open source Python library Streamlit, which is mainly used to create web apps for machine learning and data science. For this project I collected the required data from Kaggle. I used the Sklearn library to build the model and fitted the data using its built-in methods. The web app contains two pages, Home and Prediction. The Home page displays the collected data and a scatter graph plotted with the matplotlib library from the Kaggle data. The Prediction page has a text field where we can enter an employee's experience; clicking the button shows the predicted salary for that employee. A Streamlit web app serves at a localhost URL, so to make it available globally I deployed it on the Heroku platform. For this project I downloaded only a small dataset to test how it works. A larger dataset could also be used, but the training process would differ: a large dataset should be split into training and test sets so the model can be trained accurately, and more advanced training algorithms may be used. Based on our convenience and requirements, we can build a machine learning model, save it to a file, and use that file when creating a web app.
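The train-then-save step described above can be sketched with a toy experience-versus-salary fit; the numbers and the file name are made up, and the real app uses the Kaggle data:

```python
import pickle
from sklearn.linear_model import LinearRegression

# Hypothetical training data: years of experience vs. salary
X = [[1], [2], [3], [4], [5]]
y = [30000, 35000, 40000, 45000, 50000]

model = LinearRegression().fit(X, y)

# Save the fitted model so the web app can load it later
with open("salary_model.pkl", "wb") as f:
    pickle.dump(model, f)

# On the Prediction page, the app would load the file and predict
with open("salary_model.pkl", "rb") as f:
    loaded = pickle.load(f)
print(round(loaded.predict([[6]])[0]))  # salary for 6 years of experience
```

Keeping the trained model in a file means the Streamlit app only loads and predicts at request time, rather than retraining on every page view.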
Air Quality Analysis And Prediction Of Italian City
Last Updated on May 3, 2021
- The value of CO in mg/m^3 (reference value) with respect to the available data. Make assumptions if you need to, but specify them.
- The value of CO in mg/m^3 for the next 3 weeks as hourly averaged concentrations
Data Set Information
The sensing device was located on the field in a significantly polluted area, at road level, within an Italian city. Data were recorded from March 2004 to February 2005 (one year), representing the longest freely available recordings of on-field deployed air quality chemical sensor device responses. Ground-truth hourly averaged concentrations for CO, Non Metanic Hydrocarbons, Benzene, Total Nitrogen Oxides (NOx), and Nitrogen Dioxide (NO2) were provided by a co-located reference certified analyzer. Evidence of cross-sensitivities as well as both concept and sensor drifts is present, as described in De Vito et al., Sens. and Act. B, Vol. 129, 2, 2008 (citation required), eventually affecting the sensors' concentration estimation capabilities.
Attribute Information
0 Date (DD/MM/YYYY)
1 Time (HH.MM.SS)
2 True hourly averaged concentration CO in mg/m^3 (reference analyzer)
3 PT08.S1 (tin oxide) hourly averaged sensor response (nominally CO targeted)
4 True hourly averaged overall Non Metanic HydroCarbons concentration in microg/m^3 (reference analyzer)
5 True hourly averaged Benzene concentration in microg/m^3 (reference analyzer)
6 PT08.S2 (titania) hourly averaged sensor response (nominally NMHC targeted)
7 True hourly averaged NOx concentration in ppb (reference analyzer)
8 PT08.S3 (tungsten oxide) hourly averaged sensor response (nominally NOx targeted)
9 True hourly averaged NO2 concentration in microg/m^3 (reference analyzer)
10 PT08.S4 (tungsten oxide) hourly averaged sensor response (nominally NO2 targeted)
11 PT08.S5 (indium oxide) hourly averaged sensor response (nominally O3 targeted)
12 Temperature in °C
13 Relative Humidity (%)
14 AH Absolute Humidity.
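As a hedged sketch of loading this dataset with pandas: the UCI AirQuality CSV uses ';' as separator, ',' as decimal mark, and -200 to tag missing values. The snippet below uses a tiny inline sample with the same conventions rather than the real file.

```python
import io
import pandas as pd

# Tiny inline sample mimicking the AirQuality CSV conventions
sample = """Date;Time;CO(GT);PT08.S1(CO)
10/03/2004;18.00.00;2,6;1360
10/03/2004;19.00.00;-200;1292
10/03/2004;20.00.00;2,2;1402
"""

df = pd.read_csv(io.StringIO(sample), sep=";", decimal=",")
df = df.replace(-200, float("nan"))  # -200 marks a missing sensor reading

# Hourly averaged CO in mg/m^3, with missing values excluded from the mean
print(round(df["CO(GT)"].mean(), 2))  # 2.4
```

A forecasting model for the next 3 weeks of hourly CO would then be trained on the cleaned CO(GT) series.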