{
|
||
"cells": [
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "33c0bc6b-0506-4e17-9823-07a644d3093d",
|
||
"metadata": {},
|
||
"source": [
|
||
"{/* cspell:ignore workstreams workstream xvals yvals xval Arunachalam */}\n",
|
||
"\n",
|
||
"# Quantum Kernels\n",
|
||
"\n",
|
||
"## Introduction to quantum kernels\n",
|
||
"\n",
|
||
"The \"Quantum kernel method\" refers to any method that uses quantum computers to estimate a kernel. In this context, \"kernel\" will refer to the kernel matrix or individual entries therein. Recall that a feature mapping $\\Phi(\\vec{x})$ is a mapping from $\\vec{x}\\in \\mathbb{R}^d$ to $\\Phi(\\vec{x})\\in \\mathbb{R}^{d'},$ where usually $d'>d$ and where the goal of this mapping is to make the categories of data separable by a hyperplane. The kernel function takes vectors in the feature-mapped space as arguments and returns their inner product, i.e. $K:\\mathbb{R}^d\\times\\mathbb{R}^d\\rightarrow \\mathbb{R}$ with $K(x,y) = \\langle \\Phi(x)|\\Phi(y)\\rangle$. Classically, we are interested in feature maps for which the kernel function is easy to evaluate. This often means finding a kernel function for which the inner product in the feature-mapped space can be written in terms of the original data vectors, without having to ever construct $\\Phi(x)$ and $\\Phi(y)$. In the method of quantum kernels, the feature mapping is done by a quantum circuit, and the kernel is estimated using measurements on that circuit and the relative measurement probabilities.\n",
|
||
"\n",
|
||
"In this lesson we will examine the depths of pre-coded encoding circuits that use substantial entanglement and compare those to depths of circuits we code by hand. This is not to advocate for one method over another. You may find that pre-coded circuits are too deep, and that the entanglement in the custom-built circuit is insufficient to be useful. Again, these are shown only to enable your exploration.\n",
|
||
"\n",
|
||
"Before walking through a kernel matrix estimation in detail, let us outline the workflow using the language of Qiskit patterns.\n",
|
||
"\n",
|
||
"### Step 1: Map classical inputs to a quantum problem\n",
|
||
"\n",
|
||
"* Input: Training dataset\n",
|
||
"* Output: Abstract circuit for calculating a kernel matrix entry\n",
|
||
"\n",
|
||
"Given the dataset, the starting point is to encode the data into a quantum circuit. In other words, we need to map our data into the Hilbert space of states of our quantum computer. We do this by constructing a data-dependent circuit. There are many ways of doing this, and the previous lesson outlined a number of options. You can construct your own circuit to encode your data, or you can use a pre-made feature map like ZZFeatureMap. In this lesson, we wil do both.\n",
|
||
"\n",
|
||
"Note that in order to calculate a single kernel matrix element, we will want to encode two different points, so we can estimate their inner product. A full quantum kernel workflow will of course, involve many such inner products between mapped data vectors, as well as classical machine learning methods. But the core step being iterated is the estimation of a single kernel matrix element. For this we select a data-dependent quantum circuit and map two data vectors into the feature space.\n",
|
||
"\n",
|
||
"\n",
|
||
"\n",
|
||
"For the task of generating a kernel matrix, we are particularly interested in the probability of measuring the $|0\\rangle^{\\otimes N}$ state, in which all $N$ qubits are in the $|0\\rangle$ state. To see this, consider that the circuit responsible for encoding and mapping of one data vector $\\vec{x}_i$ can be written as $\\Phi(\\vec{x}_i)$, and the one responsible for encoding and mapping $\\vec{x}_j$ is $\\Phi(\\vec{x}_j)$, and denote the mapped states\n",
|
||
"$$\n",
|
||
"|\\psi(\\vec{x}_i)\\rangle = \\Phi(\\vec{x}_i)|0\\rangle^{\\otimes N}\n",
|
||
"$$\n",
|
||
"$$\n",
|
||
"|\\psi(\\vec{x}_j)\\rangle = \\Phi(\\vec{x}_j)|0\\rangle^{\\otimes N}.\n",
|
||
"$$\n",
|
||
"\n",
|
||
"These states _are_ the mapping of the data to higher dimensions, so our desired kernel entry is the inner product\n",
|
||
"$$\n",
|
||
"\\langle\\psi(\\vec{x}_j)|\\psi(\\vec{x}_i)\\rangle = \\langle 0 |^{\\otimes N}\\Phi^\\dagger(\\vec{x}_j)\\Phi(\\vec{x}_i)|0\\rangle^{\\otimes N}.\n",
|
||
"$$\n",
|
||
"If we operate on the default initial state $|0\\rangle^{\\otimes N}$ with both circuits $\\Phi^\\dagger(\\vec{x}_j)$ and $\\Phi(\\vec{x}_i)$, the probability of then measuring the state $|0\\rangle^{\\otimes N}$ is\n",
|
||
"$$\n",
|
||
"P_0 = |\\langle0|^{\\otimes N}\\Phi^\\dagger(\\vec{x}_j)\\Phi(\\vec{x}_i)|0\\rangle^{\\otimes N}|^2.\n",
|
||
"$$\n",
|
||
"This is exactly the value we want (up to $||^2$). The measurement layer of our circuit will return measurement probabilities (or so-called \"quasi-probabilities\", if certain error mitigation methods are used). The probability of interest is that of the zero state, $|0\\rangle^{\\otimes N}$.\n",
|
||
"\n",
|
||
"\n",
|
||
"### Step 2: Optimize problem for quantum execution\n",
|
||
"\n",
|
||
"* Input: Abstract circuit, not optimized for a particular backend\n",
|
||
"* Output: Target circuit and observable, optimized for the selected QPU\n",
|
||
"\n",
|
||
"In this step, we will use the `generate_preset_pass_manager` function from Qiskit to specify an optimization routine for our circuit with respect to the real quantum computer on which we plan to run the experiment. We set `optimization_level=3` , which means we will use the preset pass manager which provides the highest level of optimization. In this context, \"optimization\" refers to optimizing the implementation of the circuit on a real quantum computer. This includes considerations like selecting physical qubits to correspond to qubits in the abstract quantum circuit that will minimize gate depth, or selecting physical qubits with the lowest available error rates. This is not directly related to optimization of the machine learning problem (as in classical optimizers like COBYLA).\n",
|
||
"\n",
|
||
"Depending on how you implement step 2, you may have to optimize the circuit more than once, since each pair of points involved in a matrix element produce a different circuit to be measured.\n",
|
||
"\n",
|
||
"### Step 3: Execute using Qiskit Runtime Primitives\n",
|
||
"\n",
|
||
"* Input: Target circuit\n",
|
||
"* Output: Probability distribution\n",
|
||
"\n",
|
||
"Use the `Sampler` primitive from Qiskit Runtime to reconstruct a probability distribution of states yielded from sampling the circuit. Note that you may see this referred to as a \"quasi-probability distribution\", a term which is applicable where noise is an issue and when extra steps are introduced, such as in error mitigation. In such cases, the sum of all probabilities may not exactly equal 1; hence \"quasi-probability\".\n",
|
||
"\n",
|
||
"Since we optimized the circuit for the backend in Step 2, we can avoid doing transpilation on the Runtime server by setting `skip_transpilation=True` and passing the optimized circuit to the `Sampler`.\n",
|
||
"\n",
|
||
"### Step 4: Post-process, return result in classical format\n",
|
||
"\n",
|
||
"* Input: Probability distribution\n",
|
||
"* Output: A single kernel matrix element, or a kernel matrix if repeating\n",
|
||
"\n",
|
||
"Calculate the probability of measuring $|0\\rangle^{\\otimes N}$ on the quantum circuit, and populate the kernel matrix in the position corresponding to the two data vectors used. To fill out the entire kernel matrix, we need to run a quantum experiment for each entry. Once we have a kernel matrix, we can use it in many classical machine learning algorithms that accept `pre-calculated kernels`. For example: `qml_svc = SVC(kernel=\"precomputed\")`. We can then use classical workstreams to apply our model on our testing data, and get an accuracy score. Depending on our satisfaction with our accuracy score, we may need to revisit aspects of our calculation, such as our feature map.\n",
|
||
"\n",
|
||
"### Lesson outline\n",
|
||
"\n",
|
||
"In this lesson we will carry out these steps several ways to make optimal use of your time on real quantum computers. We will apply a quantum kernel method to\n",
|
||
"* A single kernel matrix entry for data with relatively few features, using a real backend, so that we can easily follow what is happening at each step.\n",
|
||
"* An entire data set with relatively few features, using a simulated backend, so that we can see how the quantum workstream connects with classical machine learning methods\n",
|
||
"* A single kernel matrix entry for data with many features, using a real quantum computer. We will not estimate an entire kernel matrix for a large dataset, in order to respect time on IBM quantum computers."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "f05e564c-ae91-4365-88fb-a57183c525aa",
|
||
"metadata": {},
|
||
"source": [
|
||
"## Single kernel matrix entry\n",
|
||
"\n",
|
||
"### Step 1: Map classical inputs to a quantum problem\n",
|
||
"\n",
|
||
"Let us first consider a data set with just a few features, say 10. The data set could be as large as you like, since we are calculating the kernel matrix elements one at a time. We need at least two points, so we will start with that, and import a few needed packages:"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 1,
|
||
"id": "61f15d16-993a-4405-9454-685829cb08b4",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"import pandas as pd\n",
|
||
"import numpy as np\n",
|
||
"import matplotlib.pyplot as plt\n",
|
||
"\n",
|
||
"# Two mock data points, including category labels, as in training\n",
|
||
"small_data = [\n",
|
||
" [-0.194, 0.114, -0.006, 0.301, -0.359, -0.088, -0.156, 0.342, -0.016, 0.143, 1],\n",
|
||
" [-0.1, 0.002, 0.244, 0.127, -0.064, -0.086, 0.072, 0.043, -0.053, 0.02, -1],\n",
|
||
"]\n",
|
||
"\n",
|
||
"# Data points with labels removed, for inner product\n",
|
||
"train_data = [small_data[0][:-1], small_data[1][:-1]]"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "e9acb5ee-8c7a-467d-a7e9-e6219f4d443b",
|
||
"metadata": {},
|
||
"source": [
|
||
"In the next example, we will import a full dataset."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 2,
|
||
"id": "9d8e424e-751a-4057-a879-cc39d83af953",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"# from qiskit.circuit.library import ZZFeatureMap\n",
|
||
"# fm = ZZFeatureMap(feature_dimension=np.shape(train_data)[1], entanglement='linear', reps=1)\n",
|
||
"\n",
|
||
"from qiskit.circuit.library import ZFeatureMap\n",
|
||
"\n",
|
||
"fm = ZFeatureMap(feature_dimension=np.shape(train_data)[1])\n",
|
||
"\n",
|
||
"\n",
|
||
"unitary1 = fm.assign_parameters(train_data[0])\n",
|
||
"unitary2 = fm.assign_parameters(train_data[1])"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "2b41da49-ce41-45eb-83ff-cfcbaa8c2c92",
|
||
"metadata": {},
|
||
"source": [
|
||
"The two unitaries above exactly correspond to $U_1$ and $U_2$ described in the introduction. We can combine them using `UnitaryOverlap`. As always, we want to keep an eye on our circuit depth."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 3,
|
||
"id": "4273dcf7-784c-4edb-a9cd-9b249d24dbb8",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"circuit depth = 9\n"
|
||
]
|
||
},
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"<Image src=\"/learning/images/courses/quantum-machine-learning/quantum-kernel-methods/extracted-outputs/4273dcf7-784c-4edb-a9cd-9b249d24dbb8-1.avif\" alt=\"Output of the previous code cell\" />"
|
||
]
|
||
},
|
||
"execution_count": 3,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"from qiskit.circuit.library import UnitaryOverlap\n",
|
||
"\n",
|
||
"\n",
|
||
"overlap_circ = UnitaryOverlap(unitary1, unitary2)\n",
|
||
"overlap_circ.measure_all()\n",
|
||
"\n",
|
||
"print(\"circuit depth = \", overlap_circ.decompose().depth())\n",
|
||
"overlap_circ.decompose().draw(\"mpl\", scale=0.6, style=\"iqp\")"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "0a9f9df0-d543-496a-99fd-8b66a53941a7",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Step 2: Optimize problem for quantum execution\n",
|
||
"\n",
|
||
"We start by selecting the least busy backend, then optimize our circuit for running on that backend."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 4,
|
||
"id": "27f62452-2534-4f97-a738-52e4a19a7772",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"<IBMBackend('ibm_sherbrooke')>\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"# Import needed packages\n",
|
||
"from qiskit.transpiler.preset_passmanagers import generate_preset_pass_manager\n",
|
||
"from qiskit_ibm_runtime import QiskitRuntimeService\n",
|
||
"\n",
|
||
"# Get the least busy backend\n",
|
||
"service = QiskitRuntimeService(channel=\"ibm_quantum\")\n",
|
||
"backend = service.least_busy(\n",
|
||
" operational=True, simulator=False, min_num_qubits=fm.num_qubits\n",
|
||
")\n",
|
||
"print(backend)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 5,
|
||
"id": "36cfd4e9-55c9-46c4-8afe-bd6cd0a40f45",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"# Apply level 3 optimization to our overlap circuit\n",
|
||
"pm = generate_preset_pass_manager(optimization_level=3, backend=backend)\n",
|
||
"overlap_ibm = pm.run(overlap_circ)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "13def807-7e1a-44c8-a7d8-2db5ce0f75ec",
|
||
"metadata": {},
|
||
"source": [
|
||
"For complicated circuits, this step will substantially increase the circuit depth as it maps to native gates for real quantum computers, and information may need to be moved from qubit to qubit. In this simple case, the depth is hardly affected at all."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 6,
|
||
"id": "79348ff5-4d6d-46d7-ae2b-e6ffbb22dd25",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"circuit depth = 10\n"
|
||
]
|
||
},
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"1"
|
||
]
|
||
},
|
||
"execution_count": 6,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"print(\"circuit depth = \", overlap_ibm.decompose().depth())\n",
|
||
"overlap_ibm.decompose().depth(lambda instr: len(instr.qubits) > 1)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "ec97b263-4037-4ed9-9588-110335d2c51d",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Step 3: Execute using Qiskit Runtime Primitives\n",
|
||
"\n",
|
||
"The syntax for running on a simulator is commented out below. For this dataset, with a small number of features, running on a simulator is still an option. For utility-scale calculations, simulation is not typically feasible. Simulators should only be used to debug scaled-down code."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "36a81e65-d14f-4dc2-b64c-4acbd5a185cd",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"# Run this for a simulator\n",
|
||
"# from qiskit.primitives import StatevectorSampler\n",
|
||
"\n",
|
||
"# from qiskit_ibm_runtime import Options, Session, Sampler\n",
|
||
"\n",
|
||
"# num_shots = 10000\n",
|
||
"\n",
|
||
"# Evaluate the problem using state vector-based primitives from Qiskit\n",
|
||
"# sampler = StatevectorSampler()\n",
|
||
"# results = sampler.run([overlap_circ], shots=num_shots).result()\n",
|
||
"# .get_counts() returns counts associated with a state labeled by bit results such as |001101...01>.\n",
|
||
"# counts_bit = results[0].data.meas.get_counts()\n",
|
||
"# .get_int_counts returns the same counts, but labeled by integer equivalent of the above bit string.\n",
|
||
"# counts = results[0].data.meas.get_int_counts()"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "1ad869b2-ad50-4a82-bc4c-70fd0ab802c5",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"# Benchmarked on ibm_kyoto, 7-11-24, took 4 sec.\n",
|
||
"\n",
|
||
"# Import our runtime primitive\n",
|
||
"from qiskit_ibm_runtime import Session, SamplerV2 as Sampler\n",
|
||
"\n",
|
||
"num_shots = 10000\n",
|
||
"\n",
|
||
"# Use sampler and get the counts\n",
|
||
"\n",
|
||
"sampler = Sampler(mode=backend)\n",
|
||
"results = sampler.run([overlap_ibm], shots=num_shots).result()\n",
|
||
"# .get_counts() returns counts associated with a state labeled by bit results such as |001101...01>.\n",
|
||
"counts_bit = results[0].data.meas.get_counts()\n",
|
||
"# .get_int_counts returns the same counts, but labeled by integer equivalent of the above bit string.\n",
|
||
"counts = results[0].data.meas.get_int_counts()"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "16d66085-047a-4718-80fe-eb64c20bd398",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Step 4: Post-process, return result in classical format\n",
|
||
"\n",
|
||
"As described in the introduction, the most useful measurement here is the probability of measuring the zero state $|00000\\rangle$."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 62,
|
||
"id": "33522847-21dc-48b8-9270-ce11fd1ec529",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"0.6525"
|
||
]
|
||
},
|
||
"execution_count": 62,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"counts.get(0, 0.0) / num_shots"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "65dfa3e4-bfe4-45d7-815f-e8c1aa24d916",
|
||
"metadata": {},
|
||
"source": [
|
||
"This is the outcome we wanted: an estimate of the inner product (up to mod squared) of the vectors corresponding to two data points. If we want to look at the full distribution of measurement probabilities (or quasiprobabilities), we can do so using the ```plot_distribution``` function as shown below. One sees that for a large number of qubits, pictures like this quickly become intractable."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 30,
|
||
"id": "29aaf5d6-ea7e-4373-b33c-9299699a3964",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"<Image src=\"/learning/images/courses/quantum-machine-learning/quantum-kernel-methods/extracted-outputs/29aaf5d6-ea7e-4373-b33c-9299699a3964-0.avif\" alt=\"Output of the previous code cell\" />"
|
||
]
|
||
},
|
||
"execution_count": 30,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"from qiskit.visualization import plot_distribution\n",
|
||
"\n",
|
||
"plot_distribution(counts_bit)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "d0a52344-34f2-498b-8f46-489d581bd8ba",
|
||
"metadata": {},
|
||
"source": [
|
||
"Alternatively, one might define a visualization like the one below to look only at the top 10 most probable measurements. This could be important for troubleshooting or trying to glean more intuition for the data. But the measurement probability of the zero state is our kernel matrix element."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 29,
|
||
"id": "c141e800-79eb-400a-9b8c-75e2393ce24e",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"<Image src=\"/learning/images/courses/quantum-machine-learning/quantum-kernel-methods/extracted-outputs/c141e800-79eb-400a-9b8c-75e2393ce24e-0.avif\" alt=\"Output of the previous code cell\" />"
|
||
]
|
||
},
|
||
"metadata": {},
|
||
"output_type": "display_data"
|
||
}
|
||
],
|
||
"source": [
|
||
"def visualize_counts(probs, num_qubits):\n",
|
||
" \"\"\"Visualize the outputs from the Qiskit Sampler primitive.\"\"\"\n",
|
||
" zero_prob = probs.get(0, 0.0)\n",
|
||
" top_10 = dict(sorted(probs.items(), key=lambda item: item[1], reverse=True)[:10])\n",
|
||
" top_10.update({0: zero_prob})\n",
|
||
" by_key = dict(sorted(top_10.items(), key=lambda item: item[0]))\n",
|
||
" xvals, yvals = list(zip(*by_key.items()))\n",
|
||
" xvals = [bin(xval)[2:].zfill(num_qubits) for xval in xvals]\n",
|
||
" plt.bar(xvals, yvals)\n",
|
||
" plt.xticks(rotation=75)\n",
|
||
" plt.title(\"Results of sampling\")\n",
|
||
" plt.xlabel(\"Measured bitstring\")\n",
|
||
" plt.ylabel(\"Counts\")\n",
|
||
" plt.show()\n",
|
||
"\n",
|
||
"\n",
|
||
"visualize_counts(counts, overlap_circ.num_qubits)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "47e85d94-ee94-4ed9-8223-ccd5fc9442d4",
|
||
"metadata": {},
|
||
"source": [
|
||
"From this information about only one inner product between two data points in the higher dimensional feature space, all we can say is that their overlap is fairly large compared to the maximum overlap (which would be 1.0). This could be an indicator that these two data points are somehow similar in nature and will be categorized in the same class classes. Or it could be an indicator that our feature map is not effective at mapping into a space where like data has a strong overlap and unlike data has a small overlap. In order to know which is true, we must apply our feature map to the entire set of data and see if the resulting kernel matrix can be manipulated to effectively separate classes with high accuracy.\n",
|
||
"\n",
|
||
"It is worth noting that we used the ```ZFeatureMap``` which resulted in low two-qubit transpiled depth (depth 1, in fact). If your circuits become too deep, it is sure to result in a lot of noise, and this will make the probability of measuring the zero state very low, even if your feature map is well-matched to your data. For example, a repetition of the above process using ```ZZFeatureMap``` and ```, entanglement='linear', reps=1``` yielded ```dist.get(0,0.0) = 0.0015``` using the same data points. This is due to the much greater circuit depths and two-qubit depths from ```ZZFeatureMap```. The figure below shows the probability distribution for that calculation.\n",
|
||
"\n",
|
||
"\n",
|
||
"\n",
|
||
"It is worth playing around with a few data points from the same category to see how low your depth must be to obtain good results. The following is rough advice that is sure to have exceptions. Generally, a two-qubit, transpiled depth of 10 or fewer should be no problem. A two-qubit, transpiled depth of 50-60 is state-of-the-art and will require advanced error mitigation among other tools. In between, your results may vary with data similarity, feature map expressivity, circuit width, and other factors."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "33eda2a4-cd97-4bba-8acd-4017bfe2cd12",
|
||
"metadata": {},
|
||
"source": [
|
||
"Ordinarily the post-processing step would also include classical machine learning processes. In the next section we will extend this process to an entire dataset, and show the classical machine learning workflow."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "41850d1d-3800-4aa0-aa01-fe6e72c872be",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Check-in Questions\n",
|
||
"\n",
|
||
"__Q:__ In a 10-qubit quantum circuit, generally, how many different states are there that could possibly be measured?\n",
|
||
"\n",
|
||
"__Answer:__\n",
|
||
"\n",
|
||
"$10^{10}$ or 10 billion.\n",
|
||
"\n",
|
||
"__Q:__ Let us suppose that someone new to quantum computing attempts to use a quantum circuit that has very high two-qubit depth, and they do not use error mitigation. Let us further suppose that this results in an error rate of 10% on each qubit. If the true (error-free) kernel matrix element corresponding to this circuit is very large, say 1.0, what would be the probability of measuring all 10 qubits to be in the state with every qubit |0>?\n",
|
||
"\n",
|
||
"__Answer:__\n",
|
||
"\n",
|
||
"The probability of each qubit being correctly found in the |0> state is 0.90. The probability for all 10 qubits to be found in the correct state is $0.90^10$ or about 35%.\n",
|
||
"\n",
|
||
"__Q:__ Explain in your own words why it is so important to monitor circuit depths. This is true generally, but explain it in the context of quantum kernel estimation.\n",
|
||
"\n",
|
||
"__Answer:__\n",
|
||
"\n",
|
||
"In this QKE workflow, our estimates are based on the measurements of the zero state, meaning the state in which every qubit is found in the $|0\\rangle$ state. Very deep circuits will introduce high error rates. When that error rate is compounded over many qubits, this will reduce the probability of measuring the zero state, substantially."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "0fde5aca-0d54-40b1-82d4-9d1b2a1b67bb",
|
||
"metadata": {},
|
||
"source": [
|
||
"## Full kernel matrix\n",
|
||
"\n",
|
||
"In this section, we will extend the above process to the binary classification of a full dataset. This will introduce two important components: (1) we can now implement classical machine learning in post-processing, and (2) we can obtain accuracy scores for our training.\n",
|
||
"\n",
|
||
"### Step 1: Map classical inputs to a quantum problem\n",
|
||
"\n",
|
||
"Now we will import an existing dataset for our classification. This dataset consists of 128 rows (data points) and 14 features on each point. There is a 15th element that indicates the binary category of each point ($\\pm 1$). The dataset is imported below, or you can access the dataset and view its structure [here](https://github.com/qiskit-community/prototype-quantum-kernel-training/blob/main/data/dataset_graph7.csv).\n",
|
||
"\n",
|
||
"We will use the first 90 data points for training, and the next 30 points for testing."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 15,
|
||
"id": "60f2699d-ef47-44f7-931a-4a70d4363c0e",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"--2024-07-11 23:05:22-- https://raw.githubusercontent.com/qiskit-community/prototype-quantum-kernel-training/main/data/dataset_graph7.csv\n",
|
||
"Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.111.133, 185.199.109.133, ...\n",
|
||
"Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected.\n",
|
||
"HTTP request sent, awaiting response... 200 OK\n",
|
||
"Length: 49405 (48K) [text/plain]\n",
|
||
"Saving to: ‘dataset_graph7.csv.15’\n",
|
||
"\n",
|
||
"dataset_graph7.csv. 100%[===================>] 48.25K --.-KB/s in 0.02s \n",
|
||
"\n",
|
||
"2024-07-11 23:05:23 (2.11 MB/s) - ‘dataset_graph7.csv.15’ saved [49405/49405]\n",
|
||
"\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"!wget https://raw.githubusercontent.com/qiskit-community/prototype-quantum-kernel-training/main/data/dataset_graph7.csv\n",
|
||
"\n",
|
||
"df = pd.read_csv(\"dataset_graph7.csv\", sep=\",\", header=None)\n",
|
||
"\n",
|
||
"# Prepare training data\n",
|
||
"\n",
|
||
"train_size = 90\n",
|
||
"X_train = df.values[0:train_size, :-1]\n",
|
||
"train_labels = df.values[0:train_size, -1]\n",
|
||
"\n",
|
||
"# Prepare testing data\n",
|
||
"test_size = 30\n",
|
||
"X_test = df.values[train_size : train_size + test_size, :-1]\n",
|
||
"test_labels = df.values[train_size : train_size + test_size, -1]"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "4a665fcb-bd27-4713-9c0a-b7747de5fb45",
|
||
"metadata": {},
|
||
"source": [
|
||
"We will already prepare for storing multiple outputs by constructing a kernel matrix and a test matrix of appropriate dimensions."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 16,
|
||
"id": "34d92041-0501-4c20-b780-ae5352510637",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"# Empty kernel matrix\n",
|
||
"num_samples = np.shape(X_train)[0]\n",
|
||
"kernel_matrix = np.full((num_samples, num_samples), np.nan)\n",
|
||
"test_matrix = np.full((test_size, num_samples), np.nan)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "920233dd-cfac-498d-be88-746719083cc2",
|
||
"metadata": {},
|
||
"source": [
|
||
"Now we create a feature map for encoding and mapping our classical data in a quantum circuit. We are free to construct our own feature map or use a pre-fabricated one. Feel free to modify the feature map below, or switch back to ZFeatureMap. But always pay attention to circuit depth. Recall that in the previous 6-qubit example the transpiled circuit depth was intractably high when using ZZFeatureMap. As the scale and complexity of the circuit increase, the depth could rapidly increase to a point where noise overwhelms our results. Whenever you know something about your data structure that may inform what feature map structure would be most useful, it is advisable to create your own custom feature map that leverages that knowledge."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 17,
|
||
"id": "ca813500-5856-4b50-a213-a7e87d825c18",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"from qiskit.circuit import Parameter, ParameterVector, QuantumCircuit\n",
|
||
"\n",
|
||
"# Prepare feature map for computing overlap\n",
|
||
"num_features = np.shape(X_train)[1]\n",
|
||
"num_qubits = int(num_features / 2)\n",
|
||
"\n",
|
||
"# To use a custom feature map use the lines below.\n",
|
||
"entangler_map = [[0, 2], [3, 4], [2, 5], [1, 4], [2, 3], [4, 6]]\n",
|
||
"\n",
|
||
"fm = QuantumCircuit(num_qubits)\n",
|
||
"training_param = Parameter(\"θ\")\n",
|
||
"feature_params = ParameterVector(\"x\", num_qubits * 2)\n",
|
||
"fm.ry(training_param, fm.qubits)\n",
|
||
"for cz in entangler_map:\n",
|
||
" fm.cz(cz[0], cz[1])\n",
|
||
"for i in range(num_qubits):\n",
|
||
" fm.rz(-2 * feature_params[2 * i + 1], i)\n",
|
||
" fm.rx(-2 * feature_params[2 * i], i)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "3c6416b5-9eb1-40a4-b72c-8912df931139",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Step 2 & 3: Optimize problem & execute using primitives\n",
|
||
"\n",
|
||
"We will construct an overlap circuit, and if we were running on a real quantum computer in this example, we would optimize it for execution as before. But in this case, we intend to step over all data points and calculate the full kernel matrix. For each pair of data vectors $\\vec{x}_i$ and $\\vec{x}_j$, we create a different overlap circuit. Thus we must optimize our circuit for each data point pair. So steps 2 & 3 would be done together in the multiple iterations.\n",
|
||
"\n",
|
||
"The code cell below does exactly the same process as before for a single data point pair. This time it is simply executed inside two `for` loops, and there is the additional line at the end `kernel_matrix[x_1,x_2] = ...` to store the results of each calculation. Note that we have leveraged the symmetry of a kernel matrix to reduce the number of calculations by 1/2. We have also simply set the diagonal elements to 1, as they should be in the absence of noise. Depending on your implementation and required precision, you could also use the diagonal elements to estimate noise or learn about it for error mitigation purposes.\n",
|
||
"\n",
|
||
"Once the kernel matrix has been fully populated, we repeat the process for the test data and populate the test_matrix. This is really also a kernel matrix; we simply give it a different name to distinguish the two."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "7995748f-fc97-4eda-bed6-8b189c1e65ea",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"training done\n",
|
||
"test matrix done\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"# To use a simulator\n",
|
||
"from qiskit.primitives import StatevectorSampler\n",
|
||
"\n",
|
||
"# Remember to insert your token in the QiskitRuntimeService constructor to use real quantum computers\n",
|
||
"# service = QiskitRuntimeService(channel=\"ibm_quantum\")\n",
|
||
"# backend = service.least_busy(\n",
|
||
"# operational=True, simulator=False, min_num_qubits=fm.num_qubits\n",
|
||
"# )\n",
|
||
"\n",
|
||
"num_shots = 10000\n",
|
||
"\n",
|
||
"# Evaluate the problem using state vector-based primitives from Qiskit.\n",
|
||
"sampler = StatevectorSampler()\n",
|
||
"\n",
|
||
"for x1 in range(0, train_size):\n",
|
||
" for x2 in range(x1 + 1, train_size):\n",
|
||
" unitary1 = fm.assign_parameters(list(X_train[x1]) + [np.pi / 2])\n",
|
||
" unitary2 = fm.assign_parameters(list(X_train[x2]) + [np.pi / 2])\n",
|
||
"\n",
|
||
" # Create the overlap circuit\n",
|
||
" overlap_circ = UnitaryOverlap(unitary1, unitary2)\n",
|
||
" overlap_circ.measure_all()\n",
|
||
"\n",
|
||
" # These lines run the qiskit sampler primitive.\n",
|
||
" counts = (\n",
|
||
" sampler.run([overlap_circ], shots=num_shots)\n",
|
||
" .result()[0]\n",
|
||
" .data.meas.get_int_counts()\n",
|
||
" )\n",
|
||
"\n",
|
||
" # Assign the probability of the 0 state to the kernel matrix, and the transposed element (since this is an inner product)\n",
|
||
" kernel_matrix[x1, x2] = counts.get(0, 0.0) / num_shots\n",
|
||
" kernel_matrix[x2, x1] = counts.get(0, 0.0) / num_shots\n",
|
||
" # Fill in on-diagonal elements with 1, again, since this is an inner-product corresponding to probability (or alter the code to check these entries and verify they yield 1)\n",
|
||
" kernel_matrix[x1, x1] = 1\n",
|
||
"\n",
|
||
"print(\"training done\")\n",
|
||
"\n",
|
||
"# Similar process to above, but for testing data.\n",
|
||
"for x1 in range(0, test_size):\n",
|
||
" for x2 in range(0, train_size):\n",
|
||
" unitary1 = fm.assign_parameters(list(X_test[x1]) + [np.pi / 2])\n",
|
||
" unitary2 = fm.assign_parameters(list(X_train[x2]) + [np.pi / 2])\n",
|
||
"\n",
|
||
" # Create the overlap circuit\n",
|
||
" overlap_circ = UnitaryOverlap(unitary1, unitary2)\n",
|
||
" overlap_circ.measure_all()\n",
|
||
"\n",
|
||
" counts = (\n",
|
||
" sampler.run([overlap_circ], shots=num_shots)\n",
|
||
" .result()[0]\n",
|
||
" .data.meas.get_int_counts()\n",
|
||
" )\n",
|
||
"\n",
|
||
" test_matrix[x1, x2] = counts.get(0, 0.0) / num_shots\n",
|
||
"\n",
|
||
"print(\"test matrix done\")"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "ea65cd41-4ba3-423c-a98f-2f8e3751adf4",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Step 4: Post-process, return result in classical format\n",
|
||
"\n",
|
||
"Now that we have a kernel matrix and a similarly formatted test_matrix from quantum kernel methods, we can apply classical machine learning algorithms to make predictions about our test data and check its accuracy. We will start by importing Scikit-Learn's `sklearn.svc`, a support vector classifier (SVC). We must specify that we want the SVC to use our precomputed kernel using `kernel = precomputed`."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 19,
|
||
"id": "78d9e291-54c3-4c45-827e-066c6d012da5",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"# import a support vector classifier from a classical ML package.\n",
|
||
"from sklearn.svm import SVC\n",
|
||
"\n",
|
||
"# Specify that you want to use a pre-computed kernel matrix\n",
|
||
"qml_svc = SVC(kernel=\"precomputed\")"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "9bd69e5f-dafd-492c-bd5d-f5e16925f611",
|
||
"metadata": {},
|
||
"source": [
|
||
"Using `SVC.fit`, we can now feed in the kernel matrix and the training labels to obtain a fit. `SVC.score` will then score our test data against that fit using our test_matrix, and return our accuracy."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 20,
|
||
"id": "d8708973-c97c-494c-af9c-e4a0fe430452",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"Precomputed kernel classification test score: 1.0\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"# Feed in the pre-computed matrix and the labels of the training data. The classical algorithm gives you a fit.\n",
|
||
"qml_svc.fit(kernel_matrix, train_labels)\n",
|
||
"\n",
|
||
"# Now use the .score to test your data, using the matrix of test data, and test labels as your inputs.\n",
|
||
"qml_score_precomputed_kernel = qml_svc.score(test_matrix, test_labels)\n",
|
||
"print(f\"Precomputed kernel classification test score: {qml_score_precomputed_kernel}\")"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "0200dceb-6d61-4e16-8c3d-d08dbc5529e5",
|
||
"metadata": {},
|
||
"source": [
|
||
"We see that the accuracy of our trained model was 100%. This is great, and it shows that QKE can work. But that is very different from quantum advantage. Classical kernels would likely have been able to solve this classification problem with 100% accuracy as well. There is much work to be done characterizing different data types and data relationships to see where quantum kernels will be most useful in the current utility era.\n",
|
||
"We leave it to the learner to modify parts of this workflow and study the effectiveness of various quantum feature maps. Here are a few things to consider:\n",
|
||
"* How robust is the accuracy? Does it hold for broad types of data or just this specific training data?\n",
|
||
"* What structure in your data makes you suspect that a quantum feature map is useful?\n",
|
||
"* How is the accuracy affected by increasing/decreasing the amount of training data?\n",
|
||
"* What feature maps can you use and how do the results vary with feature maps?\n",
|
||
"* How are the accuracy and running time affected by increasing the number of features?\n",
|
||
"* Which trends, if any, do you expect to hold on real quantum computers?"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "da2427c4-a1a1-4284-b10f-95d5c5230755",
|
||
"metadata": {},
|
||
"source": [
|
||
"## Scaling to more features and qubits\n",
|
||
"\n",
|
||
"In this section, we will repeat the calculation of a single matrix element, but for a much larger number of features, sketching the path to scale toward utility. The restriction to a single matrix element is done so that the process can be shown without using up too much of your allotted time on quantum computers.\n",
|
||
"\n",
|
||
"### Step 1: Map classical inputs to a quantum problem\n",
|
||
"\n",
|
||
"We will assume a starting point of a dataset in which each data point has 42 features. As in the first example, we will calculate a single kernel matrix element, requiring two data points. The two points below have 42 features and a single category variable ($\\pm 1$)."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 79,
|
||
"id": "014ba70a-5202-4c86-b886-7ff27ae9a1b3",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"# Two mock data points, including category labels, as in training\n",
|
||
"\n",
|
||
"large_data = [\n",
|
||
" [\n",
|
||
" -0.028,\n",
|
||
" -1.49,\n",
|
||
" -1.698,\n",
|
||
" 0.107,\n",
|
||
" -1.536,\n",
|
||
" -1.538,\n",
|
||
" -1.356,\n",
|
||
" -1.514,\n",
|
||
" -0.109,\n",
|
||
" -1.8,\n",
|
||
" -0.122,\n",
|
||
" -1.651,\n",
|
||
" -1.955,\n",
|
||
" -0.123,\n",
|
||
" -1.732,\n",
|
||
" 0.091,\n",
|
||
" -0.048,\n",
|
||
" -0.128,\n",
|
||
" -0.026,\n",
|
||
" 0.082,\n",
|
||
" -1.263,\n",
|
||
" 0.065,\n",
|
||
" 0.004,\n",
|
||
" -0.055,\n",
|
||
" -0.08,\n",
|
||
" -0.173,\n",
|
||
" -1.734,\n",
|
||
" -0.39,\n",
|
||
" -1.451,\n",
|
||
" 0.078,\n",
|
||
" -1.578,\n",
|
||
" -0.025,\n",
|
||
" -0.184,\n",
|
||
" -0.119,\n",
|
||
" -1.336,\n",
|
||
" 0.055,\n",
|
||
" -0.204,\n",
|
||
" -1.578,\n",
|
||
" 0.132,\n",
|
||
" -0.121,\n",
|
||
" -1.599,\n",
|
||
" -0.187,\n",
|
||
" -1,\n",
|
||
" ],\n",
|
||
" [\n",
|
||
" -1.414,\n",
|
||
" -1.439,\n",
|
||
" -1.606,\n",
|
||
" 0.246,\n",
|
||
" -1.673,\n",
|
||
" 0.002,\n",
|
||
" -1.317,\n",
|
||
" -1.262,\n",
|
||
" -0.178,\n",
|
||
" -1.814,\n",
|
||
" 0.013,\n",
|
||
" -1.619,\n",
|
||
" -1.86,\n",
|
||
" -0.25,\n",
|
||
" -0.212,\n",
|
||
" -0.214,\n",
|
||
" -0.033,\n",
|
||
" 0.071,\n",
|
||
" -0.11,\n",
|
||
" -1.607,\n",
|
||
" 0.441,\n",
|
||
" -0.143,\n",
|
||
" -0.009,\n",
|
||
" -1.655,\n",
|
||
" -1.579,\n",
|
||
" 0.381,\n",
|
||
" -1.86,\n",
|
||
" -0.079,\n",
|
||
" -0.088,\n",
|
||
" -0.058,\n",
|
||
" -1.481,\n",
|
||
" -0.064,\n",
|
||
" -0.065,\n",
|
||
" -1.507,\n",
|
||
" 0.177,\n",
|
||
" -0.131,\n",
|
||
" -0.153,\n",
|
||
" 0.07,\n",
|
||
" -1.627,\n",
|
||
" 0.593,\n",
|
||
" -1.547,\n",
|
||
" -0.16,\n",
|
||
" -1,\n",
|
||
" ],\n",
|
||
"]\n",
|
||
"train_data = [large_data[0][:-1], large_data[1][:-1]]"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "3e4dd6a7-e51c-4dc2-99ea-66e2746e411d",
|
||
"metadata": {},
|
||
"source": [
|
||
"Recall that the ZZFeatureMap produced rather deep circuits in the case of relatively few features (14 features). As we increase the number of features, we need to closely monitor circuit depth. To illustrate this, we will first try using the ZZFeatureMap and check the depth of the resulting circuit."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 83,
|
||
"id": "bdb1ecc4-7827-4eb7-bf7b-5a09838775ca",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"from qiskit.circuit.library import ZZFeatureMap\n",
|
||
"\n",
|
||
"fm = ZZFeatureMap(\n",
|
||
" feature_dimension=np.shape(train_data)[1], entanglement=\"linear\", reps=1\n",
|
||
")\n",
|
||
"\n",
|
||
"unitary1 = fm.assign_parameters(train_data[0])\n",
|
||
"unitary2 = fm.assign_parameters(train_data[1])"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 84,
|
||
"id": "5bc865cb-8f8b-4812-9a05-1bd34dbcbcb9",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"circuit depth = 251\n",
|
||
"two-qubit depth 165\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"from qiskit.circuit.library import UnitaryOverlap\n",
|
||
"\n",
|
||
"\n",
|
||
"overlap_circ = UnitaryOverlap(unitary1, unitary2)\n",
|
||
"overlap_circ.measure_all()\n",
|
||
"\n",
|
||
"print(\"circuit depth = \", overlap_circ.decompose(reps=2).depth())\n",
|
||
"print(\n",
|
||
" \"two-qubit depth\",\n",
|
||
" overlap_circ.decompose().depth(lambda instr: len(instr.qubits) > 1),\n",
|
||
")\n",
|
||
"# overlap_circ.draw(\"mpl\", scale=0.6, style=\"iqp\")"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "b3348ab6-ef4e-40d2-b803-e86484e0b8f3",
|
||
"metadata": {},
|
||
"source": [
|
||
"As described before, determining exactly how deep is too deep is nuanced. But a two-qubit depth of more than 100, even before transpilation is a non-starter. This is why custom feature maps have been emphasized throughout this lesson. If you know something about the structure of your entire dataset, you should design an entanglement map with that structure in mind. Here, since we are only calculating the inner product between two such data points, we have prioritized low circuit depth over any detailed consideration of data structure."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 95,
|
||
"id": "784f8a68-0d77-433b-8482-bac38ddb8adc",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"from qiskit.circuit import Parameter, ParameterVector, QuantumCircuit\n",
|
||
"\n",
|
||
"# Prepare feature map for computing overlap\n",
|
||
"\n",
|
||
"entangler_map = [\n",
|
||
" [3, 4],\n",
|
||
" [2, 5],\n",
|
||
" [1, 4],\n",
|
||
" [2, 3],\n",
|
||
" [4, 6],\n",
|
||
" [7, 9],\n",
|
||
" [10, 11],\n",
|
||
" [9, 12],\n",
|
||
" [8, 11],\n",
|
||
" [9, 10],\n",
|
||
" [11, 13],\n",
|
||
" [14, 16],\n",
|
||
" [17, 18],\n",
|
||
" [16, 19],\n",
|
||
" [15, 18],\n",
|
||
" [16, 17],\n",
|
||
" [18, 20],\n",
|
||
"]"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 96,
|
||
"id": "587daff1-7c5b-4505-8064-827c1fbdc742",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"# Use the entangler map above to build a feature map\n",
|
||
"\n",
|
||
"num_features = np.shape(train_data)[1]\n",
|
||
"num_qubits = int(num_features / 2)\n",
|
||
"\n",
|
||
"fm = QuantumCircuit(num_qubits)\n",
|
||
"training_param = Parameter(\"θ\")\n",
|
||
"feature_params = ParameterVector(\"x\", num_qubits * 2)\n",
|
||
"fm.ry(training_param, fm.qubits)\n",
|
||
"for cz in entangler_map:\n",
|
||
" fm.cz(cz[0], cz[1])\n",
|
||
"for i in range(num_qubits):\n",
|
||
" fm.rz(-2 * feature_params[2 * i + 1], i)\n",
|
||
" fm.rx(-2 * feature_params[2 * i], i)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": null,
|
||
"id": "02576ae9-8f24-4f47-8c37-ae0c3f710d4c",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"from qiskit.circuit.library import UnitaryOverlap\n",
|
||
"\n",
|
||
"# Assign features of each data point to a unitary, an instance of the general feature map.\n",
|
||
"\n",
|
||
"unitary1 = fm.assign_parameters(list(train_data[0]) + [np.pi / 2])\n",
|
||
"unitary2 = fm.assign_parameters(list(train_data[1]) + [np.pi / 2])\n",
|
||
"\n",
|
||
"# Create the overlap circuit\n",
|
||
"\n",
|
||
"overlap_circ = UnitaryOverlap(unitary1, unitary2)\n",
|
||
"overlap_circ.measure_all()"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "fbe9ba90-6463-488d-b6f2-e9d4ee8f0871",
|
||
"metadata": {},
|
||
"source": [
|
||
"We won't bother checking the depths yet, since what really matters is the transpiled two-qubit depth."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "9a5c14fc-aa58-461f-a1d5-fc644e042ecc",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Step 2: Optimize problem for quantum execution\n",
|
||
"\n",
|
||
"We start by selecting the least busy backend, then optimize our circuit for running on that backend."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 98,
|
||
"id": "99e3a8c7-9aba-4890-a492-cb3779b5de03",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"<IBMBackend('ibm_nazca')>\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"# Import needed packages\n",
|
||
"from qiskit.transpiler.preset_passmanagers import generate_preset_pass_manager\n",
|
||
"from qiskit_ibm_runtime import QiskitRuntimeService\n",
|
||
"\n",
|
||
"# Get the least busy backend\n",
|
||
"service = QiskitRuntimeService(channel=\"ibm_quantum\")\n",
|
||
"backend = service.least_busy(\n",
|
||
" operational=True, simulator=False, min_num_qubits=fm.num_qubits\n",
|
||
")\n",
|
||
"print(backend)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "0b524533-7422-4d91-a3d1-82dfdeaf1588",
|
||
"metadata": {},
|
||
"source": [
|
||
"On small-scale jobs, a preset pass manager will often return the same circuit with the same depth, reliably. But in very large, complex circuits the pass manager can return different transpiled circuits each time it runs. This is because it is using heuristics, and because very large circuits will have a complicated landscape of possible optimizations. It is often useful to transpile a few times and take the shallowest circuit. This only introduces classical overhead and may substantially improve the results from the quantum computer.\n",
|
||
"\n",
|
||
"Here, we transpile the unitary overlap circuit 20 times, and look at the depths of the circuits obtained."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 99,
|
||
"id": "f5f8f833-8ac2-4df0-8268-4d83f841fd16",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"circuit depth = 61\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"# Apply level 3 optimization to our overlap circuit\n",
|
||
"transpiled_qcs = []\n",
|
||
"transpiled_depths = []\n",
|
||
"transpiled_twoqubit_depths = []\n",
|
||
"for i in range(1, 20):\n",
|
||
" pm = generate_preset_pass_manager(optimization_level=3, backend=backend)\n",
|
||
" overlap_ibm = pm.run(overlap_circ)\n",
|
||
" transpiled_qcs.append(overlap_ibm)\n",
|
||
" transpiled_depths.append(overlap_ibm.decompose().depth())\n",
|
||
" transpiled_twoqubit_depths.append(\n",
|
||
" overlap_ibm.decompose().depth(lambda instr: len(instr.qubits) > 1)\n",
|
||
" )\n",
|
||
"\n",
|
||
"print(\"circuit depth = \", overlap_ibm.decompose().depth())"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 100,
|
||
"id": "fd604f87-32d1-42dd-a1fb-f2315cb4dfdc",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"name": "stdout",
|
||
"output_type": "stream",
|
||
"text": [
|
||
"[61, 60, 60, 69, 60, 60, 60, 65, 60, 60, 69, 61, 77, 77, 65, 60, 60, 77, 61]\n",
|
||
"[13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13]\n"
|
||
]
|
||
}
|
||
],
|
||
"source": [
|
||
"print(transpiled_depths)\n",
|
||
"print(transpiled_twoqubit_depths)"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "30fad0d8-f5e4-4dcf-946a-956f23334e9c",
|
||
"metadata": {},
|
||
"source": [
|
||
"Here you can see that there is some variation in the total gate depth with different transpilation passes. Our circuit is not yet deep/wide enough to see variation in the two-qubit transpiled depths. We will use the `transpiled_qcs[1]`, which has a depth of 60, just slightly lower than the depth of the deepest circuit obtained, which was 77."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 101,
|
||
"id": "28f00388-0e6e-41db-a0b4-9200d58ba5cb",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"overlap_ibm = transpiled_qcs[1]"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "e96cee51-e79f-4eab-829b-397645bc1080",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Step 3: Execute using Qiskit Runtime Primitives\n",
|
||
"\n",
|
||
"As we scale closer to utility, simulators will not be useful. Only the syntax for real quantum computers is shown here."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 102,
|
||
"id": "7eedcf5f-5ead-4eb0-b423-669901f572ba",
|
||
"metadata": {},
|
||
"outputs": [],
|
||
"source": [
|
||
"# Run on ibm_osaka, 7-12-24, required 22 sec.\n",
|
||
"\n",
|
||
"# Import our runtime primitive\n",
|
||
"from qiskit_ibm_runtime import SamplerV2 as Sampler\n",
|
||
"\n",
|
||
"# Open a Runtime session:\n",
|
||
"session = Session(backend=backend)\n",
|
||
"num_shots = 10000\n",
|
||
"# Use sampler and get the counts\n",
|
||
"\n",
|
||
"sampler = Sampler(mode=session)\n",
|
||
"options = sampler.options\n",
|
||
"options.dynamical_decoupling.enable = True\n",
|
||
"options.twirling.enable_gates = True\n",
|
||
"counts = (\n",
|
||
" sampler.run([overlap_ibm], shots=num_shots).result()[0].data.meas.get_int_counts()\n",
|
||
")\n",
|
||
"\n",
|
||
"# Close session after done\n",
|
||
"session.close()"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "f40540b8-146d-4165-af12-db902232300d",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Step 4: Post-process, return result in classical format\n",
|
||
"\n",
|
||
"As described in the introduction, the most useful measurement here is the probability of measuring the zero state $|00000\\rangle$."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "code",
|
||
"execution_count": 103,
|
||
"id": "ce9f4dc1-b562-437e-b9fe-5a1781b67bfe",
|
||
"metadata": {},
|
||
"outputs": [
|
||
{
|
||
"data": {
|
||
"text/plain": [
|
||
"0.0138"
|
||
]
|
||
},
|
||
"execution_count": 103,
|
||
"metadata": {},
|
||
"output_type": "execute_result"
|
||
}
|
||
],
|
||
"source": [
|
||
"counts.get(0, 0.0) / num_shots"
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "9b6c78b4-6788-404f-bd90-d98d7b288d42",
|
||
"metadata": {},
|
||
"source": [
|
||
"This process for the single kernel matrix element could be repeated between other data pairs in your set to obtain the full kernel matrix. The dimension of the kernel matrix is dictated by the number of points in your training data, not the number of features. So the computing cost of manipulating the kernel matrix into a predictive model does not scale like the number of features or qubits. Even for relatively small datasets with large numbers of features, the data would still need to be matched to a feature map that yields effective classification.\n",
|
||
"\n",
|
||
"### Scaling and future work\n",
|
||
"\n",
|
||
"The kernel method requires that we measure the $|0\\rangle$ as accurately as possible. But gate errors and readout errors mean that there is some none zero probability $p$ that any given qubit will be erroneously measured to be in the $|1\\rangle$ state. Even with the oversimplification that the probability of $|0\\rangle$ should be $100\\%$, for many features encoded on, say, $N$ bits, the probability of correctly measuring all bits to be $|0\\rangle$ is reduced to $(1-p)^N$. As $N$ becomes large, this method becomes less and less reliable. Overcoming this difficulty and scaling kernel estimation to more and more features is an area of current research. To learn more about this issue, see this work by [Thanasilp, Wang, Cerezo, and Holmes](https://www.nature.com/articles/s41467-024-49287-w). We recommend you explore what can be done with current quantum computers, and also look forward to what will be possible in the era of error correction."
|
||
]
|
||
},
|
||
{
|
||
"cell_type": "markdown",
|
||
"id": "556eb119-2b22-4bd9-b82e-c3a6ea31cfa1",
|
||
"metadata": {},
|
||
"source": [
|
||
"### Review\n",
|
||
"\n",
|
||
"Calculating a quantum kernel involves\n",
|
||
"* calculating kernel matrix entries, using pairs of training data points\n",
|
||
"* encoding the data and mapping it via a feature mapping\n",
|
||
"* optimizing your circuit for running on real quantum computers / backends\n",
|
||
"\n",
|
||
"The quantum kernel can then be used in classical machine learning algorithms, as in this lesson.\n",
|
||
"\n",
|
||
"Some key things to keep in mind when using quantum kernels include:\n",
|
||
"* Is the dataset likely to benefit from quantum kernel methods?\n",
|
||
"* Try different feature maps and entanglement schemes.\n",
|
||
"* Is the circuit depth acceptable?\n",
|
||
"* Try running a pass manager multiple times and use the smallest-depth circuit you can get.\n",
|
||
"\n",
|
||
"Quantum kernel methods are potentially powerful tools given a proper match between datasets with quantum-amenable features, and a suitable quantum feature map. To better understand where quantum kernels are likely to be useful, we recommend reading [Liu, Arunachalam & Temme (2021)](https://www.nature.com/articles/s41567-021-01287-z)."
|
||
]
|
||
}
|
||
],
|
||
"metadata": {
|
||
"description": "Quantum kernels are used initially to determine a kernel matrix element, a full kernel matrix and the interface with classical kernel tools is presented.",
|
||
"kernelspec": {
|
||
"display_name": "Python 3",
|
||
"language": "python",
|
||
"name": "python3"
|
||
},
|
||
"language_info": {
|
||
"codemirror_mode": {
|
||
"name": "ipython",
|
||
"version": 3
|
||
},
|
||
"file_extension": ".py",
|
||
"mimetype": "text/x-python",
|
||
"name": "python",
|
||
"nbconvert_exporter": "python",
|
||
"pygments_lexer": "ipython3",
|
||
"version": "3"
|
||
},
|
||
"title": "Quantum kernel methods"
|
||
},
|
||
"nbformat": 4,
|
||
"nbformat_minor": 4
|
||
}
|