Power Rabi conversion #275
base: sketch/SCquam/better_node_structure
Conversation
Can you please rename `Power_Rabi` and any other module names to be lowercase? https://peps.python.org/pep-0008/#package-and-module-names
```python
amps = np.arange(
    node.parameters.min_amp_factor,
    node.parameters.max_amp_factor,
    node.parameters.amp_factor_step,
)
# Number of applied Rabi pulses sweep
if N_pi > 1:
    if operation == "x180":
        N_pi_vec = np.arange(1, N_pi, 2).astype("int")
    elif operation in ["x90", "-x90", "y90", "-y90"]:
        N_pi_vec = np.arange(2, N_pi, 4).astype("int")
    else:
        raise ValueError(f"Unrecognized operation {operation}.")
else:
    N_pi_vec = np.linspace(1, N_pi, N_pi).astype("int")[::2]
```
The code here falls into the category of defining sweep ranges, and just like in Ramsey, I want these to be put into functions on the `parameters.py` class for Power Rabi.

There are three main reasons for this:
- It's not very insightful for the user to see all of the if-else statements that go into constructing the sweep range for different node parameters. They usually only care about one, and even then don't need to see how it's built.
- If you separate them into functions, as is done in Ramsey, it becomes harder for bugs to hide. Each branch of the if-else statement goes into its own function (if the name doesn't match the behaviour, there is a bug), and the functions become testable if needed.
- These functions can be called at any point, using only `node.parameters` as input, to reliably reproduce the sweep axes instead of having to pass them around between functions. The sweep axes are then safer from (a) being recreated with duplicated code somewhere, and (b) being accidentally modified and reused incorrectly.
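As a rough sketch of what this could look like (class and method names here are illustrative assumptions, not the actual quam_libs API), the sweep-range logic above could move onto the parameters class like this:

```python
# Hypothetical sketch: sweep-range construction as methods on the
# parameters class. PowerRabiParameters, amplitudes() and pulse_numbers()
# are illustrative names, not the real API.
from dataclasses import dataclass

import numpy as np


@dataclass
class PowerRabiParameters:
    min_amp_factor: float = 0.0
    max_amp_factor: float = 2.0
    amp_factor_step: float = 0.02
    max_number_pulses_per_sweep: int = 1
    operation: str = "x180"

    def amplitudes(self) -> np.ndarray:
        """Amplitude-prefactor sweep axis."""
        return np.arange(
            self.min_amp_factor,
            self.max_amp_factor,
            self.amp_factor_step,
        )

    def pulse_numbers(self) -> np.ndarray:
        """Sweep over the number of applied Rabi pulses, one branch per case."""
        N_pi = self.max_number_pulses_per_sweep
        if N_pi <= 1:
            return np.linspace(1, N_pi, N_pi).astype("int")[::2]
        if self.operation == "x180":
            return np.arange(1, N_pi, 2).astype("int")
        if self.operation in ["x90", "-x90", "y90", "-y90"]:
            return np.arange(2, N_pi, 4).astype("int")
        raise ValueError(f"Unrecognized operation {self.operation}.")
```

Any code that has a `node.parameters` instance can then call these methods to regenerate the exact same sweep axes.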
```python
    load_data_id = None,
    multiplexed = True
    ),
)

# %% {Initialize_QuAM_and_QOP}
# Class containing tools to help handling units and conversions.
```
Please remove redundant comments everywhere
```python
num_qubits = len(qubits)
# Open Communication with the QOP
if node.parameters.load_data_id is None:
    qmm = machine.connect()

# %% {QUA_program}
```
Aside from removing redundant comments, inline comments should be used even more sparingly. https://peps.python.org/pep-0008/#inline-comments

The code itself here is "self-documenting". It is obvious that

```python
n_avg = node.parameters.num_averages  # The number of averages
#                                       ^^^^^^^^^^^^^^^^^^^^^^ unnecessary
```

is the number of averages.
```python
# Bring the active qubits to the minimum frequency point
machine.set_all_fluxes(flux_point=flux_point, target=qubit)

for i, multiplexed_qubits in qubits.batch():
```
There is a subtle but important bug here. Before, it was simple enough to do:

```python
for i, qubit in enumerate(qubits):
```

because each qubit was played and processed one after the other, and its index in the list, `i`, was enough to identify it uniquely in the stream-processing arrays, i.e., `state[i]` corresponds to the state of the `i`th qubit in the list.

However, when moving to "batched" processing, qubits can be organized arbitrarily into groups of multiplexed qubits which are played sequentially. To be concrete, imagine we had the following multiplexed groups:

Group 1: "q2", "q4"
Group 2: "q1", "q3"

So not only will

```python
for i, multiplexed_qubits in qubits.batch():
```

not work, because `qubits.batch()` is a list of dictionaries, not a list of tuples, but it would also give incorrect behaviour even if you did

```python
for i, multiplexed_qubits in enumerate(qubits.batch()):
```

since the variable `i` wouldn't correspond to the qubit being measured, but to the index of the multiplexed group.

This is the reason that `multiplexed_qubits` is actually a dictionary of qubits, each keyed by its index in the original qubits list. In the example above, it would look like this:

```python
>>> qubits.batch()
[
    {1: q2, 3: q4},
    {0: q1, 2: q3}
]
```

So if you did:

```python
for multiplexed_qubits in qubits.batch():
    for i, qubit in multiplexed_qubits.items():
        ...
        save(state[i], ...)
```

then `i` would once again be a unique identifier for that qubit, and correct stream processing is recovered.
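To make the mechanism concrete, here is a minimal self-contained mock of the batching pattern described above (the `batch` function here is a stand-in, not the real `qubits.batch()` from quam_libs):

```python
# Mock of the batched-iteration pattern: each group becomes a dict
# mapping original-list index -> qubit, so indices survive regrouping.
qubits = ["q1", "q2", "q3", "q4"]

def batch(qubit_list, groups):
    """Return each group as {original_index: qubit} (illustrative stand-in)."""
    index = {q: i for i, q in enumerate(qubit_list)}
    return [{index[q]: q for q in group} for group in groups]

groups = [["q2", "q4"], ["q1", "q3"]]
state = [None] * len(qubits)

# Correct pattern: the dict key i is the qubit's index in the original
# list, so state[i] always refers to the right qubit regardless of grouping.
for multiplexed_qubits in batch(qubits, groups):
    for i, qubit in multiplexed_qubits.items():
        state[i] = f"state of {qubit}"
```

After the loop, `state[0]` belongs to `q1`, `state[1]` to `q2`, and so on, even though the qubits were played in the order q2, q4, q1, q3.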
Please refactor the outer- and inner-most for loops to be like the Ramsey node so that the behaviour is correct
```python
# Save the figure
node.results = {"figure": plt.gcf()}
node.machine = machine
node.results = {"figure": fig, "samples": samples}
```
You added this line twice, please remove one.
And remove the redundant comments
```python
        )
    }
)
ds = fetch_dataset(job, qubits, state_discrimination=state_discrimination,
```
I'd prefer if this function just takes `node.parameters` as an input, with `amps` and `N_pi_vec` re-generated using the new functions you implemented, and `state_discrimination` taken directly from it.
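A rough sketch of that refactor, using mocked-up parameters (the field and method names here are assumptions carried over from the suggested sweep-range functions, not the actual quam_libs API):

```python
# Hypothetical sketch: fetch_dataset takes the parameters object and
# rebuilds the sweep axes itself instead of receiving them as arguments.
from dataclasses import dataclass

import numpy as np


@dataclass
class FakeParameters:
    """Stand-in for node.parameters; names are illustrative assumptions."""
    min_amp_factor: float = 0.0
    max_amp_factor: float = 1.0
    amp_factor_step: float = 0.25
    use_state_discrimination: bool = True

    def amplitudes(self) -> np.ndarray:
        return np.arange(self.min_amp_factor, self.max_amp_factor, self.amp_factor_step)


def fetch_dataset(job, qubits, parameters):
    # Sweep axes are regenerated from parameters, so every caller
    # reproduces exactly the same axes without threading them around.
    amps = parameters.amplitudes()
    state_discrimination = parameters.use_state_discrimination
    return {"amps": amps, "state_discrimination": state_discrimination}
```

The call site then shrinks to `ds = fetch_dataset(job, qubits, node.parameters)`.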
```python
    }
)
    ds = fetch_dataset(job, qubits, state_discrimination=state_discrimination,
                       operation=operation, amps=amps, N_pi_vec=N_pi_vec)
else:
    node = node.load_from_id(node.parameters.load_data_id)
```
FYI I noticed a bug here:

```python
if node.parameters.load_data_id is None:
    ds = fetch_dataset(job, qubits, node.parameters)
else:
    node = node.load_from_id(load_data_id)
    ds = node.results["ds"]
...
if node.parameters.load_data_id is None:
    # do state updates
```

This format was meant to protect against analysis of historical data updating your current state. But notice that once you re-load the historical data, you also reload `node.parameters.load_data_id`, which was set to `None` when the node was first executed. So in the current, incorrect format, analyzing historical data was always overwriting your current state.json.

To fix it, we need to do something like this:

```python
load_data_id = node.parameters.load_data_id
node = node.load_from_id(load_data_id)
node.parameters.load_data_id = load_data_id
```
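To see why the reload silently discards the flag, here is a minimal mock of the mechanism (the `Node`/`Parameters` classes are stand-ins purely for illustration, not the real node classes):

```python
# Mock of the reload bug: load_from_id restores the *historical*
# parameters, where load_data_id was None at execution time.
class Parameters:
    def __init__(self, load_data_id=None):
        self.load_data_id = load_data_id


class Node:
    def __init__(self, parameters):
        self.parameters = parameters

    def load_from_id(self, data_id):
        # Returns a node rebuilt from stored data, whose parameters
        # were saved when load_data_id was still None.
        return Node(Parameters(load_data_id=None))


node = Node(Parameters(load_data_id=42))

# Buggy pattern: the reloaded node forgets it is replaying historical data,
# so a later `if node.parameters.load_data_id is None` would run state updates.
buggy = node.load_from_id(node.parameters.load_data_id)
assert buggy.parameters.load_data_id is None

# Fixed pattern: stash and restore load_data_id across the reload.
load_data_id = node.parameters.load_data_id
node = node.load_from_id(load_data_id)
node.parameters.load_data_id = load_data_id
```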
```python
fit_results = fit_pi_amplitude(ds, N_pi, state_discrimination, qubits, operation, N_pi_vec)
node.results["fit_results"] = fit_results

# %% {Plotting}
fig = plot(ds, qubits, fit_results, N_pi, state_discrimination)
```
I would prefer if the plotting function wasn't just called `plot`, but instead something like `plot_rabi_oscillations`, if that's what it's doing.
```python
def fit_pi_amplitude(ds: xr.Dataset, N_pi: int, state_discrimination: bool, qubits: List[Transmon], operation: str, N_pi_vec: np.ndarray) -> Dict:

    """
    Fits the Pi pulse amplitude for a given dataset.
```
The indentation in this docstring is messed up.
```python
    Fetches raw ADC data from the OPX and processes it into an xarray dataset with labeled coordinates and attributes.

    arguments:
    - job (QmJob): the job object containing the results.
```
I don't think the arguments are as important to document as the coordinates of the Dataset that will be returned. Consider removing them from the documentation, or moving the coordinates to the top of the docstring.
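For instance, a docstring laid out with the returned Dataset's coordinates first could look something like this (the coordinate names here are assumptions for illustration):

```python
# Hypothetical docstring layout putting the returned Dataset's
# coordinates first; names are illustrative, not the actual schema.
def fetch_dataset(job, qubits, parameters):
    """Fetch raw data from the OPX into a labeled xarray Dataset.

    Coordinates of the returned Dataset:
    - qubit: the qubit name, one entry per measured qubit.
    - amp_prefactor: the swept amplitude prefactor.
    - N_pi: the number of applied Rabi pulses.
    """
    ...
```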
```python
from quam_libs.experiments.node_parameters import DataLoadableNodeParameters, FluxControlledNodeParameters, MultiplexableNodeParameters, QmSessionNodeParameters, QubitsExperimentNodeParameters, SimulatableNodeParameters

class Parameters(
```
Take a look at the Ramsey node's parameters class and notice the following: I created a separate class called `RamseyParameters`, which gets inherited by the final `Parameters` class used in the node.

The order of inheritance is actually very important when using dataclasses like these, because it determines the order in which the arguments will be provided. For Ramsey, I put it here:

```python
class Parameters(
    NodeParameters,
    SimulatableNodeParameters,
    DataLoadableNodeParameters,
    QmSessionNodeParameters,
    RamseyParameters,  # <------------------
    FluxControlledNodeParameters,
    MultiplexableNodeParameters,
    QubitsExperimentNodeParameters,
):
```

Since the input arguments are built from bottom to top, the `RamseyParameters` fields will be added fourth. This means qubits come first, then whether you want it multiplexed, then flux control, then `RamseyParameters`. I like this order, especially since the lesser-used parameters (simulation, data-loading, etc.) are at the bottom, and think it should be kept consistent for all refactored nodes.

Can you do something like this for `PowerRabi`?
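To illustrate the bottom-to-top field ordering, here is a minimal sketch of how this could look for a `PowerRabi` parameters class (the mixin fields are simplified assumptions, not the real quam_libs classes):

```python
# Minimal sketch of dataclass inheritance order for a PowerRabi node;
# the mixin classes and their fields are simplified stand-ins.
from dataclasses import dataclass, field, fields


@dataclass
class QubitsExperimentNodeParameters:
    qubits: list = field(default_factory=list)


@dataclass
class MultiplexableNodeParameters:
    multiplexed: bool = False


@dataclass
class FluxControlledNodeParameters:
    flux_point: str = "joint"


@dataclass
class PowerRabiParameters:
    min_amp_factor: float = 0.0
    max_amp_factor: float = 2.0
    amp_factor_step: float = 0.02


@dataclass
class Parameters(
    PowerRabiParameters,
    FluxControlledNodeParameters,
    MultiplexableNodeParameters,
    QubitsExperimentNodeParameters,
):
    pass


# Dataclass fields are collected in reverse MRO order (bottom base first),
# so qubits come first, then multiplexed, then flux control, then the
# PowerRabi-specific sweep parameters.
print([f.name for f in fields(Parameters)])
# → ['qubits', 'multiplexed', 'flux_point', 'min_amp_factor', 'max_amp_factor', 'amp_factor_step']
```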
```python
    """
    Plots the Rabi oscillation results and fit for a given dataset and qubits.

    Parameters:
```
Again, please fix the indentation.
No description provided.