Integrating MQT bench into red-queen #45
base: main
Conversation
Add MQT benchmarks to Red Queen framework
Archana R does not appear to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account. Have you already signed the CLA, but the status is still pending? Let us recheck it.
I left a few inline comments, but the biggest question for me is how you want to call into mqt. Right now it looks like you're copying the benchmark code from mqt in a few places and calling it to generate the benchmark circuit. I think it would be easier if we just called mqt directly and added mqt.bench to the requirements list; then the actual benchmark test code is fairly simple and is mostly about building up the parameterizations in pytest and passing the constructors to mqt to get the expected circuit.
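A minimal sketch of the structure suggested above: build the parameterization grid in pytest-style and hand a constructor the number of qubits. The names `mqt_create_ae` and the `backends` list are hypothetical stand-ins, not the real mqt.bench API.

```python
# Sketch only: mqt_create_ae stands in for an mqt.bench circuit
# constructor and returns a plain dict instead of a QuantumCircuit.
import itertools


def mqt_create_ae(num_qubits):
    # Hypothetical stand-in for an mqt.bench create_circuit call.
    return {"name": "ae", "num_qubits": num_qubits}


# The grid that stacked @pytest.mark.parametrize decorators would expand.
optimization_levels = [0, 1, 2, 3]
backends = ["aer_simulator"]  # hypothetical backend list

cases = list(itertools.product(optimization_levels, backends))


def bench_qiskit_ae(optimization_level, backend, num_qubits=8):
    # The benchmark body stays simple: delegate circuit construction.
    return mqt_create_ae(num_qubits)
```

With this shape, adding a new benchmark is mostly a matter of swapping in a different constructor.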
@@ -0,0 +1,25 @@
OPENQASM 2.0;
Do we need the qasm files anymore if we're going straight from mqt -> QuantumCircuit?
qasm_path = "/home/archana/qamp-fall-22/red-queen/red_queen/applications/games/qasm"
def set_qasm_output_path(new_path: str = "/home/archana/qamp-fall-22/red-queen/red_queen/applications/games/qasm"):
We should be using relative paths instead of an absolute path with your local home directory. Otherwise when I run this I'll get a filesystem error because this path doesn't exist.
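One way to address this comment: resolve the qasm directory relative to the module's own location instead of a hard-coded home directory. This is a sketch; in the real module the anchor would be `Path(__file__).resolve().parent`, while here it is the working directory for illustration.

```python
# Sketch: resolve the qasm output dir relative to the module, not an
# absolute path into someone's home directory.
from pathlib import Path

# In the real module this would be Path(__file__).resolve().parent.
MODULE_DIR = Path(".").resolve()
QASM_DIR = MODULE_DIR / "qasm"


def set_qasm_output_path(new_path=None):
    """Return the qasm output path, defaulting to the package-relative dir."""
    return Path(new_path) if new_path is not None else QASM_DIR
```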
def create_circuit(num_qubits: int):
    """Returns a quantum circuit implementing Quantum Amplitude Estimation.

    Keyword arguments:
    num_qubits -- number of qubits of the returned quantum circuit
    """

    ae = AmplitudeEstimation(
        num_eval_qubits=num_qubits - 1,  # -1 because of the to-be-estimated qubit
    )
    problem = utils.get_estimation_problem()

    qc = ae.construct_circuit(problem)
    qc.name = "ae"
    qc.measure_all()

    return qc
What ends up using this? I didn't see any callers elsewhere in the PR.
@pytest.mark.parametrize("optimization_level", [0, 1, 2, 3])
@pytest.mark.parametrize("backend", backends)
@pytest.mark.parametrize("method", ["normal"])
@pytest.mark.qiskit
@pytest.mark.parametrize("optimization_level", [0, 1, 2, 3])
@pytest.mark.parametrize("backend", backends)
@pytest.mark.parametrize("method", ["normal"])
I would just remove this parameter since there isn't a mid circuit measurement variant of an amplitude estimation circuit.
if method == "normal":
    benchmark.name = "Amplitude Estimation"
    circ = build_ae_circuit(8, False)
Instead of doing this couldn't you just call circ = mqt.bench.benchmarks.ae.create_circuit(num_qubits)
? It also looks like there is a helper function get_one_benchmark()
which can be used to generate the circuits from a single caller: https://github.com/cda-tum/MQTBench/blob/a6cb6f18eb08255eb44c1527ae4ee4d581a451ad/mqt/bench/benchmark_generator.py#L476
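The "single caller" idea behind `get_one_benchmark()` can be sketched as a name-to-constructor dispatch. The registry entries below are hypothetical stubs standing in for the real mqt.bench benchmark modules, not the actual library API.

```python
# Sketch of a get_one_benchmark()-style dispatcher. The two _create_*
# stubs are hypothetical; mqt.bench ships the real constructors.
def _create_ae(num_qubits):
    return {"name": "ae", "num_qubits": num_qubits}


def _create_ghz(num_qubits):
    return {"name": "ghz", "num_qubits": num_qubits}


BENCHMARK_REGISTRY = {"ae": _create_ae, "ghz": _create_ghz}


def get_one_benchmark(benchmark_name, num_qubits):
    """Build one benchmark circuit by name from a single entry point."""
    try:
        create = BENCHMARK_REGISTRY[benchmark_name]
    except KeyError:
        raise ValueError(f"unknown benchmark: {benchmark_name}")
    return create(num_qubits)
```

A single entry point like this keeps the Red Queen test files free of per-benchmark construction details.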
def bench_qiskit_ae(benchmark, optimization_level, backend, method):
    shots = 65536
    expected_counts = {"8": shots}
Is this correct? This isn't a valid counts string, it'd be "1000" to show 8. But also I didn't think the number of qubits would be the expected answer 100% of the time for an amplitude estimation problem.
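To illustrate the comment about the counts key: Qiskit counts dictionaries are keyed by measured bitstrings, so the integer 8 measured on 4 qubits shows up as the key "1000", not "8". Python's `format` built-in makes the conversion explicit:

```python
# Counts keys are bitstrings, so the measured integer 8 on a 4-qubit
# register is the binary string "1000", zero-padded to the register width.
num_qubits = 4
value = 8
key = format(value, f"0{num_qubits}b")
# → "1000"
```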
@pytest.mark.parametrize("optimization_level", [0, 1, 2, 3])
@pytest.mark.parametrize("backend", backends)
@pytest.mark.parametrize("method", ["normal"])
It'd be really cool if we could add a parameterized sweep over the number of qubits to the benchmark here. Something like:
@pytest.mark.parametrize("num_qubits", np.logspace(0, 2, dtype=int))
and then build an amplitude estimation circuit with the variable size.
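For a sense of what that sweep covers: `np.logspace(0, 2, dtype=int)` spans sizes logarithmically from 10⁰ = 1 to 10² = 100. A pure-Python near-equivalent (deduplicated and rounded, so not bit-identical to NumPy's truncating cast) shows the shape of the sweep:

```python
# Approximate the logspace sweep without NumPy: 50 points (NumPy's
# default) spread logarithmically between 10**0 and 10**2, deduplicated.
num_points = 50
sizes = sorted({int(round(10 ** (2 * k / (num_points - 1)))) for k in range(num_points)})
# sizes runs from 1 up to 100, with points denser at the small end
```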
Add MQT benchmarks to Red Queen framework: this is part of a QAMP Fall 2022 project. This pull request is meant as a preliminary sanity check of the structure of the files and the method.