{
"cells": [
{
"cell_type": "markdown",
"id": "1a462865-ddf0-4a8d-a59b-88fc4a0e40c9",
"metadata": {},
"source": [
"{/* cspell:ignore setminus coloneqq rbrack // latex that isn't being ignored for some reason */}\n",
"\n",
"# Pauli Correlation Encoding to reduce Maxcut requirements\n",
"\n",
"*Usage estimate: 30 minutes on IBM Sherbrooke (NOTE: This is an estimate only. Your runtime may vary.)*"
]
},
{
"cell_type": "markdown",
"id": "aa4d8e16-953c-4174-ab53-2d0835e6dac7",
"metadata": {},
"source": [
"## Background\n",
"This notebook presents *Pauli Correlation Encoding* (PCE) [\\[1\\]](#references), an approach designed to encode optimization problems into qubits with greater efficiency for quantum computation. PCE maps classical variables in optimization problems to multi-body Pauli-matrix correlations, resulting in a polynomial compression of the problem's space requirements. By employing PCE, the number of qubits needed for encoding is reduced, making it particularly advantageous for near-term quantum devices with limited qubit resources. Furthermore, it is analytically demonstrated that PCE inherently mitigates barren plateaus, offering super-polynomial resilience against this phenomenon. This built-in feature enables unprecedented performance in quantum optimization solvers."
]
},
{
"cell_type": "markdown",
"id": "6f5487b6-bb73-4d85-b0fb-56cb73e8a057",
"metadata": {},
"source": [
"### Overview\n",
"The PCE approach consists of three main steps, as illustrated in the Figure 1 from [\\[1\\]](#references) in below:\n",
"1. Encoding the optimization problem into a Pauli correlation space.\n",
"2. Solving the problem using a quantum-classical optimization solver.\n",
"3. Decoding the solution back to the original optimization space.\n",
"The PCE approach is adaptable to any quantum optimization solver capable of processing Pauli correlation matrices."
]
},
{
"attachments": {},
"cell_type": "markdown",
"id": "3c948649-9946-433d-80ef-da4e296cdade",
"metadata": {},
"source": [
"![pce-overview.png](/docs/images/tutorials/solving-maxcut-with-reduced-qubit-requirements-using-pauli-correlation-encoding/af2cb835-88db-4a3d-9c86-51424b1a4bd3.avif)"
]
},
{
"cell_type": "markdown",
"id": "c00b065d-709c-43a1-a364-e635734b4ace",
"metadata": {},
"source": [
"in the Figure 1 from [\\[1\\]](#references) , Max-Cut problem is used as an example to illustrate the PCE approach. The Max-Cut problem with $m=9$ nodes is encoded into a Pauli correlation space, representing the optimization problem as a correlation matrix, specifically, 2-body Pauli-matrix correlations across $n=3$ qubits $(Q_1, Q_2, Q_3)$. Node colors indicate the Pauli string used for each encoded node.\n",
"For example, that node 1, which corresponds to binary variable $x_1$, is encoded by the expectation value of $Z_1 \\otimes Z_2 \\otimes I_3$, while $x_8$ is encoded by $I_1 \\otimes Y_2 \\otimes Y_3$.\n",
"This corresponds to compressing the problems $m$ variables into $ n = O(m^{1/2})$ qubits. More broadly, $k $-body correlations enable polynomial compressions of order $k$. The chosen Pauli set comprises three subsets of mutually-commuting Pauli strings, allowing all $m$ correlations to be experimentally estimated with only three measurement settings.\n",
"\n",
"A loss function $\\mathcal{L}$ of Pauli expectation values that imitates the original Max-Cut objective function is constructed. The loss function is then optimized using a quantum-classical optimization solver, such as the Variational Quantum Eigensolver (VQE).\n",
"\n",
"Once the optimization is complete, the solution is decoded back to the original optimization space, yielding the optimal Max-Cut solution.\n",
"\n",
"## Requirements\n",
"\n",
"Before starting this tutorial, be sure you have the following installed:\n",
"- Qiskit SDK v1.0 or later, with visualization support ( `pip install 'qiskit[visualization]'` )\n",
"- Qiskit Runtime 0.22 or later (`pip install qiskit-ibm-runtime`)\n",
"- Rustworkx graph library (`pip install rustworkx`)"
]
},
{
"cell_type": "markdown",
"id": "95dd433f-95bc-47ae-bc88-9366107c4179",
"metadata": {},
"source": [
"## Setup"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "68ed8eb1-f8fa-4cff-b98c-662e67577b79",
"metadata": {},
"outputs": [],
"source": [
"from itertools import combinations\n",
"\n",
"import numpy as np\n",
"import rustworkx as rx\n",
"from scipy.optimize import minimize\n",
"\n",
"from qiskit.circuit.library import efficient_su2\n",
"from qiskit.transpiler.preset_passmanagers import generate_preset_pass_manager\n",
"from qiskit.quantum_info import SparsePauliOp\n",
"from qiskit_ibm_runtime import EstimatorV2 as Estimator\n",
"from qiskit_ibm_runtime import QiskitRuntimeService\n",
"from qiskit_ibm_runtime import Session\n",
"from rustworkx.visualization import mpl_draw\n",
"\n",
"service = QiskitRuntimeService()\n",
"backend = service.least_busy(\n",
" operational=True, simulator=False, min_num_qubits=127\n",
")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "fbc400e4-3f40-45f9-ad65-b46fb4ef4ed3",
"metadata": {},
"outputs": [],
"source": [
"def calc_cut_size(graph, partition0, partition1):\n",
" \"\"\"Calculate the cut size of the given partitions of the graph.\"\"\"\n",
"\n",
" cut_size = 0\n",
" for edge0, edge1 in graph.edge_list():\n",
" if edge0 in partition0 and edge1 in partition1:\n",
" cut_size += 1\n",
" elif edge0 in partition1 and edge1 in partition0:\n",
" cut_size += 1\n",
" return cut_size"
]
},
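{
"cell_type": "markdown",
"id": "calc-cut-size-sanity-md",
"metadata": {},
"source": [
"As a quick sanity check of this helper, consider a three-node triangle graph (a toy example, separate from the main workflow): placing node 0 in one partition and nodes 1 and 2 in the other cuts two of the three edges."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "calc-cut-size-sanity-code",
"metadata": {},
"outputs": [],
"source": [
"# Toy triangle graph: partitions {0} and {1, 2} cut edges (0, 1) and (0, 2)\n",
"toy_graph = rx.PyGraph()\n",
"toy_graph.add_nodes_from(list(range(3)))\n",
"toy_graph.add_edges_from_no_data([(0, 1), (1, 2), (0, 2)])\n",
"print(calc_cut_size(toy_graph, {0}, {1, 2}))  # expected output: 2"
]
},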
{
"cell_type": "markdown",
"id": "0b66948c-dcdc-4710-b85f-bb80a4f454ae",
"metadata": {},
"source": [
"## Step 1: Map classical inputs to a quantum problem\n",
"\n",
"### Max-Cut Problem\n",
"The Max-Cut problem is a combinatorial optimization problem that is defined on a graph $G = (V, E)$, where $V$ is the set of vertices and $E$ is the set of edges. The goal is to partition the vertices into two sets, $S$ and $V \\setminus S$, such that the number of edges between the two sets is maximized.\n",
"For the detailed description of the Max-Cut problem, please refer to the [\"Solve utility-scale quantum optimization problems\"](https://learning.quantum.ibm.com/tutorial/quantum-approximate-optimization-algorithm) tutorial.\n",
"Also, the Max-Cut problem is used as an example in the tutorial [\"Advanced Techniques for QAOA\"](https://learning.quantum.ibm.com/tutorial/advanced-techniques-for-qaoa).\n",
"In those tutorials, the QAOA algorithm is used to solve the Max-Cut problem.\n",
"\n",
"\n",
"### Graph -> Hamiltonian\n",
"In this tutorial, we will use a random graph with 1000 nodes.\n",
"\n",
"The problem size might be hard to visualize, so below is a graph with 100 nodes. (Rendering a graph with 1,000 nodes directly would make it too dense to see anything!) The graph we are working with is ten times larger."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d5fd6806-ddb5-459a-b220-491c30782071",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"<Image src=\"/docs/images/tutorials/pauli-correlation-encoding-for-qaoa/extracted-outputs/d5fd6806-ddb5-459a-b220-491c30782071-0.avif\" alt=\"Output of the previous code cell\" />"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"mpl_draw(rx.undirected_gnp_random_graph(100, 0.1, seed=42))"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "08be5474-f2c4-4ea1-ad6d-02a018d4ab10",
"metadata": {},
"outputs": [],
"source": [
"num_nodes = 1000 # Number of nodes in graph\n",
"graph = rx.undirected_gnp_random_graph(num_nodes, 0.1, seed=42)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "c82e6e75-330b-4579-b0e9-e656b21733f3",
"metadata": {},
"outputs": [],
"source": [
"import networkx as nx\n",
"\n",
"nx_graph = nx.Graph()\n",
"nx_graph.add_nodes_from(range(num_nodes))"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "f1492e9e-6c32-4779-baeb-5aa18fdcec15",
"metadata": {},
"outputs": [],
"source": [
"for edge in graph.edge_list():\n",
" nx_graph.add_edge(edge[0], edge[1])"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "44017c81-93eb-4bfc-9096-0bbfdcf4074c",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Initial cut size: 28075\n"
]
}
],
"source": [
"curr_cut_size, partition = nx.approximation.one_exchange(nx_graph, seed=1)\n",
"print(f\"Initial cut size: {curr_cut_size}\")"
]
},
{
"cell_type": "markdown",
"id": "471c4f9f-1cf0-45c9-82c2-7c33d5044ea1",
"metadata": {},
"source": [
"We encode the graph with 1000 nodes into 2-body Pauli-matrix correlations across 100 qubits. The graph is represented as a correlation matrix, where each node is encoded by a Pauli string. The sign of the expectation value of the Pauli string indicates the partition of the node. For example, node 0 is encoded by a Pauli string, $\\prod_0 = I_{19} \\otimes ... I_2 \\otimes X_1 \\otimes X_0$. The sign of the expectation value of this Pauli string indicates the partition of node 0. We define a *Pauli-correlation encoding* (PCE) relative to $\\prod$ as\n",
"\n",
"$ x_i \\coloneqq \\textit{sgn}(\\langle\\prod_i \\rangle) $\n",
"\n",
"where $x_i$ is the partition of node $i$ and $\\langle \\prod_i \\rangle \\coloneqq \\langle \\psi |\\prod_i| \\psi \\rangle $ is the expectation value of the Pauli string encoding node $i$ over a quantum state $|\\psi \\rangle$."
]
},
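{
"cell_type": "markdown",
"id": "pce-sign-decoding-md",
"metadata": {},
"source": [
"The decoding rule itself fits in a couple of lines. Below is a minimal sketch in which the expectation values are hypothetical placeholders; in the actual workflow, they come from estimating each $\\Pi_i$ on the optimized state."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "pce-sign-decoding-code",
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical expectation values <Pi_i> for four nodes (placeholders only)\n",
"example_evs = np.array([0.31, -0.07, 0.54, -0.62])\n",
"\n",
"# Decode each node's partition as x_i = sgn(<Pi_i>)\n",
"example_partitions = np.sign(example_evs)\n",
"print(example_partitions)  # [ 1. -1.  1. -1.]"
]
},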
{
"cell_type": "markdown",
"id": "aba883b0-1e4c-467d-8629-5bf375a9437b",
"metadata": {},
"source": [
"Now, let's encode the graph into a Hamiltonian using PCE.\n",
"We divide the nodes into three sets: $S_1$, $S_2$, and $S_3$.\n",
"Then, we encode the nodes in each set using the Pauli strings with $X$, $Y$, and $Z$, respectively."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "4fdd4f36-da2a-4901-9dea-93f3a3f9bcd2",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"List 1: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332]\n",
"List 2: [333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 441, 442, 443, 444, 445, 446, 447, 448, 449, 450, 451, 452, 453, 454, 455, 456, 457, 458, 459, 460, 461, 462, 463, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563, 564, 565, 566, 567, 568, 569, 570, 571, 572, 573, 574, 575, 576, 577, 578, 579, 580, 581, 582, 583, 584, 585, 586, 587, 588, 589, 590, 591, 592, 593, 594, 595, 596, 597, 598, 599, 600, 601, 602, 603, 604, 605, 606, 607, 608, 609, 610, 611, 612, 613, 614, 615, 616, 617, 618, 619, 620, 621, 622, 623, 624, 625, 626, 627, 628, 629, 630, 631, 632, 633, 634, 635, 636, 637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647, 648, 649, 650, 651, 652, 653, 654, 655, 656, 657, 658, 659, 660, 661, 662, 663, 664, 665]\n",
"List 3: [666, 667, 668, 669, 670, 671, 672, 673, 674, 675, 676, 677, 678, 679, 680, 681, 682, 683, 684, 685, 686, 687, 688, 689, 690, 691, 692, 693, 694, 695, 696, 697, 698, 699, 700, 701, 702, 703, 704, 705, 706, 707, 708, 709, 710, 711, 712, 713, 714, 715, 716, 717, 718, 719, 720, 721, 722, 723, 724, 725, 726, 727, 728, 729, 730, 731, 732, 733, 734, 735, 736, 737, 738, 739, 740, 741, 742, 743, 744, 745, 746, 747, 748, 749, 750, 751, 752, 753, 754, 755, 756, 757, 758, 759, 760, 761, 762, 763, 764, 765, 766, 767, 768, 769, 770, 771, 772, 773, 774, 775, 776, 777, 778, 779, 780, 781, 782, 783, 784, 785, 786, 787, 788, 789, 790, 791, 792, 793, 794, 795, 796, 797, 798, 799, 800, 801, 802, 803, 804, 805, 806, 807, 808, 809, 810, 811, 812, 813, 814, 815, 816, 817, 818, 819, 820, 821, 822, 823, 824, 825, 826, 827, 828, 829, 830, 831, 832, 833, 834, 835, 836, 837, 838, 839, 840, 841, 842, 843, 844, 845, 846, 847, 848, 849, 850, 851, 852, 853, 854, 855, 856, 857, 858, 859, 860, 861, 862, 863, 864, 865, 866, 867, 868, 869, 870, 871, 872, 873, 874, 875, 876, 877, 878, 879, 880, 881, 882, 883, 884, 885, 886, 887, 888, 889, 890, 891, 892, 893, 894, 895, 896, 897, 898, 899, 900, 901, 902, 903, 904, 905, 906, 907, 908, 909, 910, 911, 912, 913, 914, 915, 916, 917, 918, 919, 920, 921, 922, 923, 924, 925, 926, 927, 928, 929, 930, 931, 932, 933, 934, 935, 936, 937, 938, 939, 940, 941, 942, 943, 944, 945, 946, 947, 948, 949, 950, 951, 952, 953, 954, 955, 956, 957, 958, 959, 960, 961, 962, 963, 964, 965, 966, 967, 968, 969, 970, 971, 972, 973, 974, 975, 976, 977, 978, 979, 980, 981, 982, 983, 984, 985, 986, 987, 988, 989, 990, 991, 992, 993, 994, 995, 996, 997, 998, 999]\n"
]
}
],
"source": [
"num_qubits = 100\n",
"\n",
"list_size = num_nodes // 3\n",
"node_x = [i for i in range(list_size)]\n",
"node_y = [i for i in range(list_size, 2 * list_size)]\n",
"node_z = [i for i in range(2 * list_size, num_nodes)]\n",
"\n",
"print(\"List 1:\", node_x)\n",
"print(\"List 2:\", node_y)\n",
"print(\"List 3:\", node_z)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "331e0815-e32d-43d6-95cc-107e141482fd",
"metadata": {},
"outputs": [],
"source": [
"def build_pauli_correlation_encoding(pauli, node_list, n, k=2):\n",
" pauli_correlation_encoding = []\n",
" for idx, c in enumerate(combinations(range(n), k)):\n",
" if idx >= len(node_list):\n",
" break\n",
" paulis = [\"I\"] * n\n",
" paulis[c[0]], paulis[c[1]] = pauli, pauli\n",
" pauli_correlation_encoding.append((\"\".join(paulis)[::-1], 1))\n",
"\n",
" hamiltonian = []\n",
" for pauli, weight in pauli_correlation_encoding:\n",
" hamiltonian.append(SparsePauliOp.from_list([(pauli, weight)]))\n",
"\n",
" return hamiltonian\n",
"\n",
"\n",
"pauli_correlation_encoding_x = build_pauli_correlation_encoding(\n",
" \"X\", node_x, num_qubits\n",
")\n",
"pauli_correlation_encoding_y = build_pauli_correlation_encoding(\n",
" \"Y\", node_y, num_qubits\n",
")\n",
"pauli_correlation_encoding_z = build_pauli_correlation_encoding(\n",
" \"Z\", node_z, num_qubits\n",
")"
]
},
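{
"cell_type": "markdown",
"id": "pce-inspect-encoding-md",
"metadata": {},
"source": [
"As a quick check, we can inspect the first observable of each subset. Node 0 should be encoded by $X$ acting on qubits 0 and 1 (the first pair produced by `combinations`), and the $Y$ and $Z$ subsets follow the same pattern. Because all strings within a subset share a single Pauli basis, each subset can be estimated with one measurement setting."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "pce-inspect-encoding-code",
"metadata": {},
"outputs": [],
"source": [
"# Print the first Pauli label of each commuting subset; labels are\n",
"# little-endian, so qubit 0 is the rightmost character\n",
"for name, encoding in [\n",
"    (\"X\", pauli_correlation_encoding_x),\n",
"    (\"Y\", pauli_correlation_encoding_y),\n",
"    (\"Z\", pauli_correlation_encoding_z),\n",
"]:\n",
"    print(name, encoding[0].paulis[0])"
]
},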
{
"cell_type": "markdown",
"id": "6fe3a3ca-328b-498c-8b6d-f663b584362f",
"metadata": {},
"source": [
"## Step 2: Optimize problem for quantum hardware execution\n",
"\n",
"### Quantum circuit\n",
"Here, the state $|\\psi \\rangle$ is parameterized with $\\mathbf{\\theta}$, and we optimize these parameters $\\mathbf{\\theta}$ using a variational approach.\n",
"In this tutorial, we employ the `efficient_su2` ansatz for our variational algorithm due to its expressive capabilities and ease of implementation.\n",
"We also use the relaxed loss function, which will be introduced later in this tutorial.\n",
"As a result, we can address large-scale problems with fewer qubits and shallower circuit depths."
]
},
{
"cell_type": "code",
"execution_count": 8,
"id": "355c087d-c1c7-45af-b232-64ef7377bb2e",
"metadata": {},
"outputs": [],
"source": [
"# Build the quantum circuit\n",
"qc = efficient_su2(num_qubits, [\"ry\", \"rz\"], reps=2)"
]
},
{
"cell_type": "code",
"execution_count": 12,
"id": "09f8ba46-1b53-43aa-9180-a44640d28696",
"metadata": {},
"outputs": [],
"source": [
"# Optimize the circuit\n",
"\n",
"pm = generate_preset_pass_manager(optimization_level=3, backend=backend)\n",
"qc = pm.run(qc)"
]
},
{
"cell_type": "markdown",
"id": "dec4a50d-f981-42a1-8b8a-c009d9d95f4f",
"metadata": {},
"source": [
"### Loss function\n",
"For the loss function $\\mathcal{L}$, we use a relaxation of the Max-Cut objective function as described in [\\[1\\]](#references), which is defined as $\\mathcal{V}(\\mathbf{x}) \\coloneqq \\sum_{(i, j) \\in E} W_{i, j}(1-x_i x_j)$. Here, $W_{i, j}$ denotes the weight of the edge $(i, j)$, and $x_i$ represents the partition of node $i$.\n",
"The loss function $\\mathcal{L}$ is given by:\n",
"\n",
"$\\mathcal{L}\\coloneqq \\sum_{(i, j) \\in E} W_{i, j} \\text{tanh} (\\alpha \\langle\\prod_i \\rangle) \\text{tanh} (\\alpha \\langle\\prod_j \\rangle) + \\mathcal{L}^{(\\text{reg})}$\n",
"\n",
"where the Max-Cut objective function is replaced by the smooth hyperbolic tangents of the expectation values of the Pauli strings encoding the nodes. The regularization term $\\mathcal{L}^{(\\text{reg})}$ and the rescaling factor $\\alpha$, proportional to the number of qubits, are introduced to improve the solver's performance.\n",
"\n",
"The regularization term is defined as:\n",
"\n",
"$\\mathcal{L}^{(\\text{reg})}$ is defined as $\\mathcal{L}^{(\\text{reg})} \\coloneqq \\beta \\nu \\lbrack \\frac{1}{m} \\sum_{i \\in V} \\text{tanh} (\\alpha \\langle\\prod_i \\rangle)^2 \\rbrack ^2$\n",
"\n",
"where $\\beta=1/2$, $\\nu = |E|/2 + (m -1) /4$, and $m$ is the number of nodes in the graph."
]
},
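{
"cell_type": "markdown",
"id": "alpha-intuition-md",
"metadata": {},
"source": [
"To build intuition for the rescaling factor $\\alpha$, the following standalone sketch evaluates the edge term of $\\mathcal{L}$ on randomly drawn, hypothetical expectation values (no hardware involved). As $\\alpha$ grows, $\\tanh(\\alpha \\langle \\Pi_i \\rangle)$ sharpens toward $\\mathrm{sgn}(\\langle \\Pi_i \\rangle) = x_i$, so the edge term approaches $\\sum_{(i, j) \\in E} x_i x_j$, whose minimization is equivalent to maximizing the cut."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "alpha-intuition-code",
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical expectation values, drawn at random for illustration only\n",
"rng = np.random.default_rng(0)\n",
"toy_evs = rng.uniform(-0.2, 0.2, size=num_nodes)\n",
"\n",
"# As alpha grows, tanh(alpha * ev) approaches the sign of ev\n",
"for alpha in (1, 10, 100):\n",
"    t = np.tanh(alpha * toy_evs)\n",
"    edge_term = sum(t[i] * t[j] for i, j in graph.edge_list())\n",
"    print(f\"alpha = {alpha}: edge term = {edge_term:.1f}\")"
]
},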
{
"cell_type": "code",
"execution_count": 13,
"id": "b699ef86-40ed-431c-a93f-5a12bb70e4fd",
"metadata": {},
"outputs": [],
"source": [
"def loss_func_estimator(x, ansatz, hamiltonian, estimator, graph):\n",
" \"\"\"\n",
" Calculates the specified loss function for the given ansatz, Hamiltonian, and graph.\n",
"\n",
" The expectation values of each Pauli string in the Hamiltonian are first obtained\n",
" by running the ansatz on the quantum backend. These expectation values are then\n",
" passed through the nonlinear function tanh(alpha * prod_i). The loss function is\n",
" subsequently computed from these transformed values.\n",
" \"\"\"\n",
" job = estimator.run(\n",
" [\n",
" (ansatz, hamiltonian[0], x),\n",
" (ansatz, hamiltonian[1], x),\n",
" (ansatz, hamiltonian[2], x),\n",
" ]\n",
" )\n",
" result = job.result()\n",
"\n",
" # calculate the loss function\n",
" node_exp_map = {}\n",
" idx = 0\n",
" for r in result:\n",
" for ev in r.data.evs:\n",
" node_exp_map[idx] = ev\n",
" idx += 1\n",
"\n",
" loss = 0\n",
" alpha = num_qubits\n",
" for edge0, edge1 in graph.edge_list():\n",
" loss += np.tanh(alpha * node_exp_map[edge0]) * np.tanh(\n",
" alpha * node_exp_map[edge1]\n",
" )\n",
"\n",
" regulation_term = 0\n",
" for i in range(len(graph.nodes())):\n",
" regulation_term += np.tanh(alpha * node_exp_map[i]) ** 2\n",
" regulation_term = regulation_term / len(graph.nodes())\n",
" regulation_term = regulation_term**2\n",
" beta = 1 / 2\n",
" v = len(graph.edges()) / 2 + (len(graph.nodes()) - 1) / 4\n",
" regulation_term = beta * v * regulation_term\n",
"\n",
" loss = loss + regulation_term\n",
"\n",
" global experiment_result\n",
" print(f\"Iter {len(experiment_result)}: {loss}\")\n",
" experiment_result.append({\"loss\": loss, \"exp_map\": node_exp_map})\n",
" return loss"
]
},
{
"cell_type": "markdown",
"id": "c97b0436-1411-426f-977f-dff3d324ed75",
"metadata": {},
"source": [
"## Step 3: Execute using Qiskit primitives"
]
},
{
"cell_type": "markdown",
"id": "865fe32c-d47b-40bc-b68f-1d17e7627966",
"metadata": {},
"source": [
"In this tutorial, we set `max_iter=50` for the optimization loop for demonstration purpose. If we increase the number of iterations, we can expect better results."
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "3d7ee525-a426-46ed-8773-881095162f11",
"metadata": {},
"outputs": [],
"source": [
"pce = []\n",
"pce.append(\n",
" [op.apply_layout(qc.layout) for op in pauli_correlation_encoding_x]\n",
")\n",
"pce.append(\n",
" [op.apply_layout(qc.layout) for op in pauli_correlation_encoding_y]\n",
")\n",
"pce.append(\n",
" [op.apply_layout(qc.layout) for op in pauli_correlation_encoding_z]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "21458812-a52b-4de3-a330-83016256900f",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Iter 0: 16659.649201600296\n",
"Iter 1: 12104.242957555361\n",
"Iter 2: 6541.137221994661\n",
"Iter 3: 6650.6188244671985\n",
"Iter 4: 7033.193518185085\n",
"Iter 5: 6743.687931793412\n",
"Iter 6: 6223.574718684094\n",
"Iter 7: 6457.3302709535965\n",
"Iter 8: 6581.316449107595\n",
"Iter 9: 6365.761052029896\n",
"Iter 10: 6415.872673527322\n",
"Iter 11: 6421.996561600348\n",
"Iter 12: 6636.372822791712\n",
"Iter 13: 6965.174320702346\n",
"Iter 14: 6774.236562696287\n",
"Iter 15: 6393.837617108355\n",
"Iter 16: 6234.311401676519\n",
"Iter 17: 6518.192237615901\n",
"Iter 18: 6559.933925068997\n",
"Iter 19: 6646.157979243488\n",
"Iter 20: 6573.726111605048\n",
"Iter 21: 6190.642092901959\n",
"Iter 22: 6653.06500163594\n",
"Iter 23: 6545.713700369988\n",
"Iter 24: 6399.996441760465\n",
"Iter 25: 6115.959687941808\n",
"Iter 26: 6665.915093554849\n",
"Iter 27: 6832.882201259893\n",
"Iter 28: 6541.392749578919\n",
"Iter 29: 6813.3456910443165\n",
"Iter 30: 6460.800944368402\n",
"Iter 31: 6359.635437029245\n",
"Iter 32: 6040.891641882451\n",
"Iter 33: 6573.930674936448\n",
"Iter 34: 6668.031753293785\n",
"Iter 35: 6450.002712889748\n",
"Iter 36: 6519.8298811058075\n",
"Iter 37: 6467.134502398199\n",
"Iter 38: 6655.284651397334\n",
"Iter 39: 6371.168353987336\n",
"Iter 40: 6480.337259347923\n",
"Iter 41: 6339.256786764425\n",
"Iter 42: 6588.635046825541\n",
"Iter 43: 6617.677964971322\n",
"Iter 44: 6469.0441600679205\n",
"Iter 45: 6567.874244906106\n",
"Iter 46: 6217.899975264532\n",
"Iter 47: 6783.481394627947\n",
"Iter 48: 6813.371853626112\n",
"Iter 49: 6506.5871531488765\n",
" message: Maximum number of function evaluations has been exceeded.\n",
" success: False\n",
" status: 2\n",
" fun: 6040.891641882451\n",
" x: [ 1.375e+00 1.951e+00 ... 1.923e-01 4.087e-02]\n",
" nfev: 50\n",
" maxcv: 0.0\n"
]
}
],
"source": [
"# Run the optimization using Session\n",
"\n",
"with Session(backend=backend) as session:\n",
" estimator = Estimator(mode=session)\n",
"\n",
" experiment_result = []\n",
"\n",
" def loss_func(x):\n",
" return loss_func_estimator(\n",
" x, qc, [pce[0], pce[1], pce[2]], estimator, graph\n",
" )\n",
"\n",
" np.random.seed(42)\n",
" initial_params = np.random.rand(qc.num_parameters)\n",
" result = minimize(\n",
" loss_func, initial_params, method=\"COBYLA\", options={\"maxiter\": 50}\n",
" )\n",
"print(result)"
]
},
{
"cell_type": "markdown",
"id": "2f642e7c-df66-497a-86d9-21a750508677",
"metadata": {},
"source": [
"## Step 4: Post-process and return result in desired classical format\n",
"\n",
"The partitions of the nodes are determined by evaluating the sign of the expectation values of the Pauli strings that encode the nodes."
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "222b751e-6e1b-4c44-a986-2583ef05c51e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{0, 1, 4, 8, 9, 10, 12, 13, 14, 15, 16, 18, 25, 27, 31, 32, 34, 36, 38, 39, 40, 41, 44, 46, 47, 48, 49, 50, 51, 52, 57, 60, 61, 62, 63, 64, 65, 66, 68, 71, 79, 81, 82, 86, 88, 91, 92, 93, 94, 95, 96, 99, 100, 105, 106, 107, 112, 114, 115, 121, 123, 129, 133, 134, 145, 147, 161, 165, 166, 168, 171, 173, 184, 185, 187, 188, 192, 193, 194, 196, 197, 198, 202, 205, 206, 207, 208, 209, 210, 211, 215, 217, 218, 219, 220, 221, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 238, 241, 242, 243, 244, 246, 247, 248, 249, 251, 252, 253, 255, 256, 257, 258, 259, 261, 262, 264, 265, 266, 268, 269, 270, 272, 273, 275, 276, 277, 278, 279, 281, 283, 284, 285, 286, 288, 292, 293, 294, 299, 300, 303, 305, 306, 307, 308, 310, 312, 313, 314, 316, 317, 319, 321, 326, 327, 328, 333, 336, 338, 340, 341, 342, 344, 345, 346, 349, 351, 352, 353, 356, 357, 360, 361, 362, 363, 364, 366, 368, 370, 374, 378, 379, 380, 381, 382, 383, 384, 386, 387, 388, 389, 390, 391, 393, 394, 395, 396, 397, 398, 404, 405, 406, 409, 411, 413, 415, 416, 418, 421, 425, 426, 427, 428, 429, 433, 434, 435, 437, 444, 450, 456, 457, 458, 459, 462, 463, 465, 467, 469, 470, 472, 476, 479, 484, 487, 489, 492, 493, 497, 498, 499, 502, 506, 508, 513, 516, 517, 518, 519, 521, 523, 526, 527, 528, 531, 532, 533, 535, 536, 537, 539, 540, 541, 542, 543, 544, 545, 547, 549, 550, 552, 557, 562, 563, 564, 565, 567, 568, 569, 570, 571, 572, 573, 576, 578, 579, 580, 583, 585, 587, 588, 589, 591, 595, 596, 597, 600, 602, 603, 604, 605, 606, 607, 608, 609, 610, 612, 618, 619, 623, 624, 625, 626, 627, 628, 630, 632, 636, 637, 640, 644, 646, 649, 652, 656, 657, 658, 659, 661, 662, 663, 664, 667, 669, 670, 671, 672, 674, 675, 676, 677, 678, 679, 680, 681, 682, 683, 684, 685, 686, 687, 688, 689, 690, 692, 693, 694, 695, 696, 698, 700, 701, 703, 706, 707, 708, 709, 712, 713, 714, 716, 717, 718, 719, 721, 722, 723, 724, 725, 726, 728, 730, 731, 733, 734, 735, 737, 739, 740, 741, 743, 744, 746, 748, 750, 751, 752, 753, 754, 758, 760, 761, 762, 763, 764, 765, 766, 774, 778, 780, 782, 787, 795, 800, 802, 803, 808, 809, 812, 818, 822, 825, 827, 834, 836, 840, 843, 845, 847, 850, 853, 854, 857, 858, 863, 864, 865, 866, 867, 868, 869, 870, 872, 873, 874, 875, 876, 878, 880, 881, 882, 883, 884, 885, 887, 888, 889, 890, 891, 893, 894, 895, 896, 898, 901, 902, 903, 904, 905, 907, 908, 910, 911, 912, 913, 914, 915, 916, 917, 918, 920, 921, 923, 925, 926, 928, 929, 930, 932, 934, 935, 936, 938, 939, 941, 943, 945, 946, 947, 948, 949, 953, 955, 956, 957, 958, 959, 961, 966, 975, 978, 980, 983, 988, 990, 996, 999} {2, 3, 5, 6, 7, 11, 17, 19, 20, 21, 22, 23, 24, 26, 28, 29, 30, 33, 35, 37, 42, 43, 45, 53, 54, 55, 56, 58, 59, 67, 69, 70, 72, 73, 74, 75, 76, 77, 78, 80, 83, 84, 85, 87, 89, 90, 97, 98, 101, 102, 103, 104, 108, 109, 110, 111, 113, 116, 117, 118, 119, 120, 122, 124, 125, 126, 127, 128, 130, 131, 132, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 146, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 162, 163, 164, 167, 169, 170, 172, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 186, 189, 190, 191, 195, 199, 200, 201, 203, 204, 212, 213, 214, 216, 222, 223, 224, 237, 239, 240, 245, 250, 254, 260, 263, 267, 271, 274, 280, 282, 287, 289, 290, 291, 295, 296, 297, 298, 301, 302, 304, 309, 311, 315, 318, 320, 322, 323, 324, 325, 329, 330, 331, 332, 334, 335, 337, 339, 343, 347, 348, 350, 354, 355, 358, 359, 365, 367, 369, 371, 372, 373, 375, 376, 377, 385, 392, 399, 400, 401, 402, 403, 407, 408, 410, 412, 414, 417, 419, 420, 422, 423, 
424, 430, 431, 432, 436, 438, 439, 440, 441, 442, 443, 445, 446, 447, 448, 449, 451, 452, 453, 454, 455, 460, 461, 464, 466, 468, 471, 473, 474, 475, 477, 478, 480, 481, 482, 483, 485, 486, 488, 490, 491, 494, 495, 496, 500, 501, 503, 504, 505, 507, 509, 510, 511, 512, 514, 515, 520, 522, 524, 525, 529, 530, 534, 538, 546, 548, 551, 553, 554, 555, 556, 558, 559, 560, 561, 566, 574, 575, 577, 581, 582, 584, 586, 590, 592, 593, 594, 598, 599, 601, 611, 613, 614, 615, 616, 617, 620, 621, 622, 629, 631, 633, 634, 635, 638, 639, 641, 642, 643, 645, 647, 648, 650, 651, 653, 654, 655, 660, 665, 666, 668, 673, 691, 697, 699, 702, 704, 705, 710, 711, 715, 720, 727, 729, 732, 736, 738, 742, 745, 747, 749, 755, 756, 757, 759, 767, 768, 769, 770, 771, 772, 773, 775, 776, 777, 779, 781, 783, 784, 785, 786, 788, 789, 790, 791, 792, 793, 794, 796, 797, 798, 799, 801, 804, 805, 806, 807, 810, 811, 813, 814, 815, 816, 817, 819, 820, 821, 823, 824, 826, 828, 829, 830, 831, 832, 833, 835, 837, 838, 839, 841, 842, 844, 846, 848, 849, 851, 852, 855, 856, 859, 860, 861, 862, 871, 877, 879, 886, 892, 897, 899, 900, 906, 909, 919, 922, 924, 927, 931, 933, 937, 940, 942, 944, 950, 951, 952, 954, 960, 962, 963, 964, 965, 967, 968, 969, 970, 971, 972, 973, 974, 976, 977, 979, 981, 982, 984, 985, 986, 987, 989, 991, 992, 993, 994, 995, 997, 998}\n"
]
}
],
"source": [
"# Calculate the partitions based on the final expectation values\n",
"# If the expectation value is positive, the node belongs to partition 0 (par0)\n",
"# Otherwise, the node belongs to partition 1 (par1)\n",
"\n",
"par0, par1 = set(), set()\n",
"\n",
"for i in experiment_result[-1][\"exp_map\"]:\n",
" if experiment_result[-1][\"exp_map\"][i] >= 0:\n",
" par0.add(i)\n",
" else:\n",
" par1.add(i)\n",
"print(par0, par1)"
]
},
{
"cell_type": "markdown",
"id": "74b5197c-2a18-4d11-b003-e891638d5f7a",
"metadata": {},
"source": [
"We can calculate the cut size of the Max-Cut problem using the partitions of the node."
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "1c989ac9-bf6c-46ad-8c83-82beea141629",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Cut size: 24682\n"
]
}
],
"source": [
"cut_size = calc_cut_size(graph, par0, par1)\n",
"print(f\"Cut size: {cut_size}\")"
]
},
{
"cell_type": "markdown",
"id": "f6d23122-afed-4276-ad13-02bfa33d2117",
"metadata": {},
"source": [
"Once the training is complete, we perform one round of single-bit swap search to improve the solution as a classical post-processing step.\n",
"In this process, we swap the partitions of two nodes and evaluate the cut size. If the cut size is improved, we keep the swap. We repeat this process for all possible pairs of nodes connected by an edge."
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "72ff8a19-b91c-482b-8618-681d31f789f2",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1]\n"
]
}
],
"source": [
"best_bits = []\n",
"cur_bits = []\n",
"\n",
"for i in experiment_result[-1][\"exp_map\"]:\n",
" if experiment_result[-1][\"exp_map\"][i] >= 0:\n",
" cur_bits.append(1)\n",
" else:\n",
" cur_bits.append(0)\n",
"print(cur_bits)"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "1cb7dd4b-6445-4c4b-881d-a7b97c7d51b9",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"24733 [1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1]\n"
]
}
],
"source": [
"# Swap the partitions and calculate the cut size\n",
"best_cut = 0\n",
"for edge0, edge1 in graph.edge_list():\n",
" swapped_bits = cur_bits.copy()\n",
" swapped_bits[edge0], swapped_bits[edge1] = (\n",
" swapped_bits[edge1],\n",
" swapped_bits[edge0],\n",
" )\n",
"\n",
" cur_partition = [set(), set()]\n",
" for i, bit in enumerate(swapped_bits):\n",
" if bit > 0:\n",
" cur_partition[0].add(i)\n",
" else:\n",
" cur_partition[1].add(i)\n",
" cut_size = calc_cut_size(graph, cur_partition[0], cur_partition[1])\n",
" if best_cut < cut_size:\n",
" best_cut = cut_size\n",
" best_bits = swapped_bits\n",
"\n",
"print(best_cut, best_bits)"
]
},
{
"cell_type": "markdown",
"id": "18d987bd-a62a-4b34-8a76-40c80ae629f7",
"metadata": {},
"source": [
"## References\n",
"\n",
"[1] Sciorilli, M., Borges, L., Patti, T. L., García-Martín, D., Camilo, G., Anandkumar, A., & Aolita, L. (2024). Towards large-scale quantum optimization solvers with few qubits. arXiv preprint arXiv:2401.09421."
]
},
{
"cell_type": "markdown",
"id": "871ea139-036d-4d77-992f-6df032b2c4a4",
"metadata": {},
"source": [
"## Tutorial survey\n",
"\n",
"Please take one minute to provide feedback on this tutorial. Your insights will help us improve our content offerings and user experience.\n",
"\n",
"[Link to survey](https://your.feedback.ibm.com/jfe/form/SV_8ANZAlsKSFf6DA2)"
]
},
{
"cell_type": "markdown",
"id": "3473b973-f7b3-4666-9e0c-6acc5b89d5d0",
"metadata": {},
"source": [
"© IBM Corp. 2025"
]
}
],
"metadata": {
"description": "Use Pauli Correlation Encoding to encode optimization problems into qubits with greater efficiency for quantum computation.",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3"
},
"platform": "cloud",
"title": "Pauli Correlation Encoding to reduce Maxcut requirements"
},
"nbformat": 4,
"nbformat_minor": 4
}