The benchmark results history is getting a bit confusing. The plots are supposed to quickly draw the eye to performance jumps, but the majority of jumps are just due to environment changes. Example: SolarPositionNumba.time_sun_rise_set_transit_spa
Although changes in performance due to the environment are surely interesting, I think the main goal of our benchmarks should be to evaluate performance changes in pvlib itself (holding the environment ~constant). Should we wipe out and rebuild the history using the current set of environments?
We may also want to change our environment management strategy to reduce future fragmentation, e.g. by pinning dependency versions as sketched below.
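One possible way to reduce fragmentation would be to pin the benchmark dependencies explicitly in the asv configuration, so every run builds against the same versions. The snippet below is only an illustrative sketch (written as a Python dict for readability; the real settings live in asv.conf.json as JSON with the same structure), and the version numbers are placeholders rather than a recommendation.

```python
# Illustrative sketch only: the actual settings belong in asv.conf.json
# (JSON syntax) with the same keys. Version numbers are placeholders.
asv_config_sketch = {
    # Pin the interpreter(s) used to build benchmark environments.
    "pythons": ["3.9"],
    # Pin the core dependencies so results stay comparable across runs;
    # an unpinned matrix lets every dependency upgrade spawn a new trace.
    "matrix": {
        "numpy": ["1.21"],
        "pandas": ["1.3"],
        "numba": ["0.54"],  # used by the SolarPositionNumba benchmarks
    },
}
```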
The plot is hard to decipher with that many traces. Do we need the 3 time options (1, 10, 100 days)? I agree that the main purpose is to track changes in pvlib code performance.
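For context, the asv benchmark parametrizes over the number of days, so each day count produces its own trace in every environment. The sketch below is roughly what such a benchmark class looks like; the class and method names come from the benchmark mentioned above, but the setup details (location, time range) are illustrative, not the exact pvlib benchmark code. Trimming `params` to a single representative size would cut the number of traces by two thirds.

```python
import pandas as pd
from pvlib import solarposition


class SolarPositionNumba:
    # Each value in params yields a separate trace in the asv plots, so
    # three day counts times several environments adds up quickly.
    # Dropping to e.g. params = [100] would keep one trace per environment.
    params = [1, 10, 100]
    param_names = ['ndays']

    def setup(self, ndays):
        # Illustrative location and time range (1-minute data per day).
        self.lat, self.lon = 35.1, -106.6
        self.times = pd.date_range('2020-01-01', freq='1min',
                                   periods=1440 * ndays, tz='Etc/GMT+7')

    def time_sun_rise_set_transit_spa(self, ndays):
        solarposition.sun_rise_set_transit_spa(
            self.times, self.lat, self.lon, how='numba')
```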