{
"cells": [
{
"cell_type": "markdown",
"id": "6b7abc7b-b435-43d1-9fd8-c349ee8710f3",
"metadata": {},
"source": [
"# Manage Qiskit Serverless compute and data resources\n",
"\n",
"<LegacyContent>\n",
"<Admonition type=\"note\">\n",
"This documentation is relevant to IBM Quantum&reg; Platform Classic. If you need the newer version, go to the new [IBM Quantum Platform documentation.](https://quantum.cloud.ibm.com/docs/guides/serverless-manage-resources)\n",
"</Admonition>\n",
"</LegacyContent>\n",
"<CloudContent>\n",
"<Admonition type=\"note\">\n",
"This documentation is relevant to the new IBM Quantum&reg; Platform. If you need the previous version, return to the [IBM Quantum Platform Classic documentation.](https://docs.quantum.ibm.com/guides/serverless-manage-resources)\n",
"</Admonition>\n",
"</CloudContent>"
]
},
{
"cell_type": "markdown",
"id": "3b0771d6-95c9-46dc-955a-f8702f6a2632",
"metadata": {
"tags": [
"version-info"
]
},
"source": [
"<details>\n",
"<summary><b>Package versions</b></summary>\n",
"\n",
"The code on this page was developed using the following requirements.\n",
"We recommend using these versions or newer.\n",
"\n",
"```\n",
"qiskit[all]~=1.3.1\n",
"qiskit-ibm-runtime~=0.34.0\n",
"qiskit-aer~=0.15.1\n",
"qiskit-serverless~=0.18.0\n",
"qiskit-ibm-catalog~=0.2\n",
"qiskit-addon-sqd~=0.8.1\n",
"qiskit-addon-utils~=0.1.0\n",
"qiskit-addon-mpf~=0.2.0\n",
"scipy~=1.14.1\n",
"qiskit-addon-aqc-tensor~=0.1.2\n",
"qiskit-addon-obp~=0.1.0\n",
"scipy~=1.14.1\n",
"pyscf~=2.7.0\n",
"```\n",
"</details>"
]
},
{
"cell_type": "markdown",
"id": "95b2f280-f685-455f-83e9-b172445d7c6a",
"metadata": {},
"source": [
"With Qiskit Serverless, you can manage compute and data across your [Qiskit pattern](/docs/guides/intro-to-patterns), including CPUs, QPUs, and other compute accelerators."
]
},
{
"cell_type": "markdown",
"id": "380354c0-5cab-464d-b10f-c94055de3605",
"metadata": {},
"source": [
"## Set detailed statuses\n",
"\n",
"\n",
"Serverless workloads have several stages across a workflow. By default, the following statuses are viewable with `job.status()`:\n",
"\n",
"- **`QUEUED`**: the workload is queued for classical resources\n",
"- **`INITIALIZING`**: the workload is set up\n",
"- **`RUNNING`**: the workload is currently running on classical resources\n",
"- **`DONE`**: the workload has successfully completed\n",
"\n",
"You can also set custom statuses that further describe the specific workflow stage, as follows."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a69df8bc-5033-45bf-a837-cffa9d29b844",
"metadata": {},
"outputs": [],
"source": [
"from qiskit_serverless import update_status, Job\n",
"\n",
"## If your function has a mapping stage, particularly application functions, you can set the status to \"RUNNING: MAPPING\" as follows:\n",
"update_status(Job.MAPPING)\n",
"\n",
"## While handling transpilation, error suppression, and so forth, you can set the status to \"RUNNING: OPTIMIZING_FOR_HARDWARE\":\n",
"update_status(Job.OPTIMIZING_HARDWARE)\n",
"\n",
"## After you submit jobs to Qiskit Runtime, the underlying quantum job will be queued. You can set status to \"RUNNING: WAITING_FOR_QPU\":\n",
"update_status(Job.WAITING_QPU)\n",
"\n",
"## When the Qiskit Runtime job starts running on the QPU, set the following status \"RUNNING: EXECUTING_QPU\":\n",
"update_status(Job.EXECUTING_QPU)\n",
"\n",
"## Once QPU is completed and post-processing has begun, set the status \"RUNNING: POST_PROCESSING\":\n",
"update_status(Job.POST_PROCESSING)"
]
},
{
"cell_type": "markdown",
"id": "a8746eae-6f15-4faf-8771-0f3062efc723",
"metadata": {},
"source": [
"After successful completion of this workload (with `save_result()`), this status will be updated to `DONE` automatically."
]
},
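{
"cell_type": "markdown",
"id": "7d2f4c1e-5b3a-4e8f-9c0d-2a6b8e4f1c3d",
"metadata": {},
"source": [
"For reference, the following is a minimal sketch of how these detailed statuses surface on the client side. It assumes a hypothetical uploaded function titled `transpile_remote_serverless`, and that `circuit`, `circuit_list`, and `backend` are already defined:\n",
"\n",
"```python\n",
"from qiskit_serverless import IBMServerlessClient\n",
"\n",
"serverless = IBMServerlessClient()\n",
"job = serverless.get(\"transpile_remote_serverless\").run(\n",
"    circuit=circuit,\n",
"    circuit_list=circuit_list,\n",
"    backend=backend,\n",
"    optimization_level=1,\n",
")\n",
"\n",
"# While the workload runs, job.status() reports the custom stage,\n",
"# for example \"RUNNING: OPTIMIZING_FOR_HARDWARE\"; on success it reports \"DONE\".\n",
"print(job.status())\n",
"```"
]
},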
{
"cell_type": "markdown",
"id": "b2d40a63-3359-46e9-8f1b-4746b449b407",
"metadata": {},
"source": [
"## Parallel workflows\n",
"\n",
"For classical tasks that can be parallelized, use the `@distribute_task` decorator to define compute requirements needed to perform a task. Start by recalling the `transpile_remote.py` example from the [Write your first Qiskit Serverless program](./serverless-first-program) topic with the following code.\n",
"\n",
"<LegacyContent>\n",
"The following code requires that you have already [saved your credentials](/docs/guides/setup-channel#iqp).\n",
"</LegacyContent>\n",
"<CloudContent>\n",
"The following code requires that you have already [saved your credentials](/docs/guides/cloud-setup).\n",
"</CloudContent>"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "9e41cd2f-bce6-4c8a-8e44-537c18b3023c",
"metadata": {
"tags": [
"remove-cell"
]
},
"outputs": [],
"source": [
"# This cell is hidden from users, it just creates a new folder\n",
"from pathlib import Path\n",
"\n",
"Path(\"./source_files\").mkdir(exist_ok=True)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "475d82f0-15cc-4db3-b3b0-54b07822b2a0",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Overwriting ./source_files/transpile_remote.py\n"
]
}
],
"source": [
"%%writefile ./source_files/transpile_remote.py\n",
"\n",
"from qiskit.transpiler import generate_preset_pass_manager\n",
"from qiskit_ibm_runtime import QiskitRuntimeService\n",
"from qiskit_serverless import distribute_task\n",
"\n",
"service = QiskitRuntimeService()\n",
"\n",
"@distribute_task(target={\"cpu\": 1})\n",
"def transpile_remote(circuit, optimization_level, backend):\n",
" \"\"\"Transpiles an abstract circuit (or list of circuits) into an ISA circuit for a given backend.\"\"\"\n",
" pass_manager = generate_preset_pass_manager(\n",
" optimization_level=optimization_level,\n",
" backend=service.backend(backend)\n",
" )\n",
" isa_circuit = pass_manager.run(circuit)\n",
" return isa_circuit"
]
},
{
"cell_type": "markdown",
"id": "a5914f1d-f898-4db4-8d1e-ccc8081883b9",
"metadata": {},
"source": [
"In this example, you decorated the `transpile_remote()` function with `@distribute_task(target={\"cpu\": 1})`. When run, this creates an asynchronous parallel worker task with a single CPU core, and returns with a reference to track the worker. To fetch the result, pass the reference to the `get()` function. We can use this to run multiple parallel tasks:"
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "e8fd31e6-9ab9-4d75-9ef9-a2b9ff9ad37a",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Appending to ./source_files/transpile_remote.py\n"
]
}
],
"source": [
"%%writefile --append ./source_files/transpile_remote.py\n",
"\n",
"from time import time\n",
"from qiskit_serverless import get, get_arguments, save_result, update_status, Job\n",
"\n",
"# Get arguments\n",
"arguments = get_arguments()\n",
"circuit = arguments.get(\"circuit\")\n",
"optimization_level = arguments.get(\"optimization_level\")\n",
"backend = arguments.get(\"backend\")"
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "74fdcd4a-01cd-46ca-aa24-2a8a3605346f",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Appending to ./source_files/transpile_remote.py\n"
]
}
],
"source": [
"%%writefile --append ./source_files/transpile_remote.py\n",
"# Start distributed transpilation\n",
"\n",
"update_status(Job.OPTIMIZING_HARDWARE)\n",
"\n",
"start_time = time()\n",
"transpile_worker_references = [\n",
" transpile_remote(circuit, optimization_level, backend)\n",
" for circuit in arguments.get(\"circuit_list\")\n",
"]\n",
"\n",
"transpiled_circuits = get(transpile_worker_references)\n",
"end_time = time()"
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "81696ede-3aa5-4e8c-9d35-fdd70c1bf4db",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Appending to ./source_files/transpile_remote.py\n"
]
}
],
"source": [
"%%writefile --append ./source_files/transpile_remote.py\n",
"# Save result, with metadata\n",
"\n",
"result = {\n",
" \"circuits\": transpiled_circuits,\n",
" \"metadata\": {\n",
" \"resource_usage\": {\n",
" \"RUNNING: OPTIMIZING_FOR_HARDWARE\": {\n",
" \"CPU_TIME\": end_time - start_time,\n",
" \"QPU_TIME\": 0,\n",
" },\n",
" }\n",
" },\n",
"}\n",
"\n",
"save_result(result)"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "df28a92c-3585-49f0-a2ea-828a34638684",
"metadata": {
"tags": [
"remove-cell"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Waiting for job (status 'QUEUED')\n",
"Waiting for job (status 'QUEUED')\n",
"Waiting for job (status 'QUEUED')\n",
"Waiting for job (status 'QUEUED')\n",
"Waiting for job (status 'INITIALIZING')\n",
"Waiting for job (status 'RUNNING: OPTIMIZING_FOR_HARDWARE')\n",
"Waiting for job (status 'RUNNING: OPTIMIZING_FOR_HARDWARE')\n",
"Job completed successfully\n"
]
}
],
"source": [
"# This cell is hidden from users.\n",
"# It uploads the serverless program and checks it runs.\n",
"\n",
"\n",
"def test_serverless_job(title, entrypoint):\n",
" # Import in function to stop them interfering with user-facing code\n",
" from qiskit.circuit.random import random_circuit\n",
" from qiskit_serverless import IBMServerlessClient, QiskitFunction\n",
" import time\n",
" import uuid\n",
"\n",
" title += \"_\" + uuid.uuid4().hex[:8]\n",
" serverless = IBMServerlessClient()\n",
" transpile_remote_demo = QiskitFunction(\n",
" title=title,\n",
" entrypoint=entrypoint,\n",
" working_dir=\"./source_files/\",\n",
" )\n",
" serverless.upload(transpile_remote_demo)\n",
" job = serverless.get(title).run(\n",
" circuit=random_circuit(3, 3),\n",
" circuit_list=[random_circuit(3, 3) for _ in range(3)],\n",
" backend=\"ibm_torino\",\n",
" optimization_level=1,\n",
" )\n",
" for retry in range(25):\n",
" time.sleep(5)\n",
" status = job.status()\n",
" if status == \"DONE\":\n",
" print(\"Job completed successfully\")\n",
" return\n",
" if status not in [\n",
" \"QUEUED\",\n",
" \"INITIALIZING\",\n",
" \"RUNNING\",\n",
" \"RUNNING: OPTIMIZING_FOR_HARDWARE\",\n",
" \"DONE\",\n",
" ]:\n",
" raise Exception(\n",
" f\"Unexpected job status '{status}'.\\nHere's the logs:\\n\"\n",
" + job.logs()\n",
" )\n",
" print(f\"Waiting for job (status '{status}')\")\n",
" raise Exception(\"Job did not complete in time\")\n",
"\n",
"\n",
"test_serverless_job(\n",
" title=\"transpile_remote_serverless_test\", entrypoint=\"transpile_remote.py\"\n",
")"
]
},
{
"cell_type": "markdown",
"id": "611fe030-4494-46b5-9ea1-9678ac513210",
"metadata": {},
"source": [
"### Explore different task configurations\n",
"\n",
"You can flexibly allocate CPU, GPU, and memory for your tasks via `@distribute_task()`. For Qiskit Serverless on IBM Quantum&reg; Platform, each program is equipped with 16 CPU cores and 32 GB RAM, which can be allocated dynamically as needed.\n",
"\n",
"CPU cores can be allocated as full CPU cores, or even fractional allocations, as shown in the following.\n",
"\n",
"Memory is allocated in number of bytes. Recall that there are 1024 bytes in a kilobyte, 1024 kilobytes in a megabyte, and 1024 megabytes in a gigabyte. To allocate 2 GB of memory for your worker, you need to allocate `\"mem\": 2 * 1024 * 1024 * 1024`."
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "cea90969-cfbf-4181-9ffa-524f3709dc69",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Appending to ./source_files/transpile_remote.py\n"
]
}
],
"source": [
"%%writefile --append ./source_files/transpile_remote.py\n",
"\n",
"@distribute_task(target={\n",
" \"cpu\": 16,\n",
" \"mem\": 2 * 1024 * 1024 * 1024\n",
"})\n",
"def transpile_remote(circuit, optimization_level, backend):\n",
" return None"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "55163053-2cd8-4e5d-8470-d08055a6f401",
"metadata": {
"tags": [
"remove-cell"
]
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Waiting for job (status 'QUEUED')\n",
"Waiting for job (status 'QUEUED')\n",
"Waiting for job (status 'QUEUED')\n",
"Waiting for job (status 'QUEUED')\n",
"Waiting for job (status 'QUEUED')\n",
"Waiting for job (status 'INITIALIZING')\n",
"Waiting for job (status 'RUNNING')\n",
"Waiting for job (status 'RUNNING: OPTIMIZING_FOR_HARDWARE')\n",
"Job completed successfully\n"
]
}
],
"source": [
"# This cell is hidden from users.\n",
"# It checks the distributed program works.\n",
"test_serverless_job(\n",
" title=\"transpile_remote_serverless_test\", entrypoint=\"transpile_remote.py\"\n",
")"
]
},
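{
"cell_type": "markdown",
"id": "3e9a7b5c-1d2f-4a6e-8b0c-9f4d2e7a5c1b",
"metadata": {},
"source": [
"For example, a minimal sketch of a fractional allocation might reserve half a CPU core and 1 GB of memory for a lightweight task (the function name here is a hypothetical placeholder):\n",
"\n",
"```python\n",
"@distribute_task(target={\n",
"    \"cpu\": 0.5,  # half of one CPU core\n",
"    \"mem\": 1 * 1024 * 1024 * 1024,  # 1 GB of memory\n",
"})\n",
"def lightweight_remote_task(circuit):\n",
"    # Placeholder body; replace with your own classical processing\n",
"    return circuit\n",
"```"
]
},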
{
"cell_type": "markdown",
"id": "a004dd78-0e0a-4a3b-83cb-333469533ef6",
"metadata": {},
"source": [
"## Next steps\n",
"\n",
"<Admonition type=\"info\" title=\"Recommendations\">\n",
"\n",
"- See a full example that [ports existing code to Qiskit Serverless](./serverless-port-code).\n",
"- Read a paper in which researchers used Qiskit Serverless and quantum-centric supercomputing to [explore quantum chemistry](https://arxiv.org/abs/2405.05068v1).\n",
"\n",
"</Admonition>"
]
}
],
"metadata": {
"description": "Manage compute and data across your Qiskit pattern with Qiskit Serverless.",
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3"
},
"title": "Manage Qiskit Serverless compute and data resources"
},
"nbformat": 4,
"nbformat_minor": 2
}