Added support for additional estimators for multiseries datasets #4385
**Reviewer:** We can make this section more concise and easier to maintain by folding the single-series case into the multiseries case: convert the single series to a DataFrame and keep this code for both cases, following the pattern in other files that already support multiseries (the STL decomposer might be a good example?).
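A minimal sketch of the consolidation being suggested, with made-up names (not evalml's actual API): normalize a single series to a one-column DataFrame up front so one code path serves both cases.

```python
import pandas as pd

def to_frame(y):
    """Hypothetical helper: a single series becomes a one-column
    DataFrame so the multiseries code path can handle both cases."""
    return y.to_frame() if isinstance(y, pd.Series) else y

single = pd.Series([1.0, 2.0, 3.0], name="target")
multi = pd.DataFrame({"series_a": [1.0, 2.0, 3.0], "series_b": [4.0, 5.0, 6.0]})

# Both inputs now flow through the same per-column logic, mirroring
# components that already support multiseries (e.g. the STL decomposer).
for y in (single, multi):
    frame = to_frame(y)
    for column in frame:
        print(column, frame[column].mean())
```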
**Author:** Good point, I just consolidated it.

**Author:** Actually, I just remembered that it's structured this way so that we're still able to run `self._find_significant_lags` even when `y` is `None`. Is there a way you had in mind to structure it so that `y` can still be `None`?
**Reviewer:** Hm, it seems like `y` being `None` is something we'd want to have explicit behavior for, since right now the behavior is unclear. I think we should just handle it entirely separately.
**Author:** We handle `y` being null in `self._find_significant_lags`, since we calculate all lags in that function (and just set the significant lags to be `all_lags` if `y` is `None`). Should I pull it out into its own separate branch, even though the code would be identical to the case where `y` is a series? E.g.:
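The snippet that followed the "e.g." isn't preserved in this excerpt; a hypothetical reconstruction of the duplication being described might look like this (names illustrative, not evalml's implementation):

```python
import numpy as np
import pandas as pd

def find_significant_lags(y, max_delay=5):
    """Stand-in for the static method under discussion: per the comment
    above, it computes all lags and returns them unfiltered when y is
    None (hypothetical sketch)."""
    all_lags = np.arange(1, max_delay + 1)
    if y is None:
        return all_lags
    # The real method would filter all_lags by statistical significance.
    return all_lags

def compute_lags(y):
    # The duplication in question: a dedicated y-is-None branch would
    # repeat the single-series branch verbatim.
    if y is None:
        return find_significant_lags(y)
    elif isinstance(y, pd.Series):
        return find_significant_lags(y)

print(compute_lags(None))
print(compute_lags(pd.Series([1.0, 2.0, 3.0])))
```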
**Reviewer:** Ok, sorry for drilling into this so much, but I think I understand now. My new, potentially hot-take proposal is something like:
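The proposal itself was a code block that isn't preserved in this excerpt; a hypothetical sketch of its shape, with illustrative names:

```python
import numpy as np
import pandas as pd

def find_significant_lags(y, start_delay=1, max_delay=5):
    """Hypothetical post-change version: y is required, so everything
    this function returns was actually tested for significance."""
    all_lags = np.arange(start_delay, start_delay + max_delay + 1)
    # Placeholder for the real significance filtering against y.
    return all_lags

def compute_lags(y, start_delay=1, max_delay=5):
    # Branch on y=None at the call site: with no target to test, fall
    # back to every candidate lag and never call them "significant".
    if y is None:
        return np.arange(start_delay, start_delay + max_delay + 1)
    if isinstance(y, pd.Series):
        y = y.to_frame()  # fold the single-series case into multiseries
    return {col: find_significant_lags(y[col], start_delay, max_delay) for col in y}

print(compute_lags(None))
print(compute_lags(pd.Series([1.0, 2.0, 3.0], name="target")))
```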
And then we can remove the handling of `y` being `None` from the static function. My argument for doing this is that calling all lags the statistically significant lags is a misnomer, since we didn't actually check statistical significance. This is me getting very into the weeds, though, so I very much understand if you would rather keep things closer to the way they are 😅
Regardless, even with your new proposal, we'd still be able to combine the two non-`y=None` cases by casting the series to a DataFrame.
**Author:** Your example makes sense to me. I don't see our behavior for `y is None` changing anytime soon, so I'm comfortable with pulling that out and changing the function. Will update!
**Reviewer:** What's the reasoning behind setting the index and then immediately resetting it? The value of the index shouldn't impact the order of stacking, right? Either way, we can explicitly control the index in `stack_data` with the `starting_index` argument.
**Author:** The goal of this snippet is to set the index to the time index column, stack the data (thus using the dates in the time index column to generate the new stacked dates), and then reset the index so that the resulting time index column can be used when we `pd.merge` later on in line 193. While it's possible to just copy over the `time_index` column from `X` after stacking, I think it's safer to generate it from the `X_unstacked` index like this, since we know for sure that the `X_unstacked` `time_index` aligns with the `unstacked_predictions`, whereas it's technically possible to have an `X` `time_index` that's out of order (and thus would be incorrect if we simply copied the column over). I'm open to suggestions for a cleaner implementation!
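A rough, self-contained illustration of the pattern being described, using plain pandas and made-up column names (evalml's `stack_data` helper is not reproduced here):

```python
import pandas as pd

# Unstacked frame: one column per series plus a time index column.
X_unstacked = pd.DataFrame(
    {
        "date": pd.date_range("2023-01-01", periods=3),
        "series_a": [1.0, 2.0, 3.0],
        "series_b": [4.0, 5.0, 6.0],
    }
)

# Set the time index as the DataFrame index so stacking carries the
# dates along with each value, then reset it (explicitly keeping the
# column with drop=False) so it can serve as a merge key afterwards.
stacked = X_unstacked.set_index("date")[["series_a", "series_b"]].stack()
stacked = stacked.rename("value").reset_index(drop=False)
stacked.columns = ["date", "series_id", "value"]

# The regenerated "date" column is guaranteed to align with the stacked
# values, regardless of the ordering of the original frame.
other = pd.DataFrame(
    {"date": pd.date_range("2023-01-01", periods=3), "feature": [10, 20, 30]}
)
print(pd.merge(stacked, other, on="date"))
```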
**Reviewer:** Ok, I think I understand now! I think a comment would be great. I also wonder if it would be useful to explicitly say `reset_index(drop=False)`, so that even if pandas changes their defaults we don't get screwed. My motivation here is that this is something that might be confusing to someone looking back at it in the future, since the goal isn't clear from the code itself. I hope that makes sense!
**Author:** Good call on the reset-index parameter, I'll add that in. I'll also add a clarifying comment or two so that it's clear what's going on here.

Your motivation makes sense! I've been so lost in the weeds of this implementation for a while now that it's good to have multiple pairs of eyes on this to highlight what's intuitive and what isn't 😅
**Reviewer:** Can you add a comment clarifying why you're using `True`/`False` instead of the decomposer name?
**Reviewer:** I think it would be better to just parametrize the `include_decomposer` argument here; this and the section below are confusing to read out of context.
**Author:** For this test, we are basically only checking that the number of pipelines matches up. Before, we only needed to add the decomposer once, since there was one estimator type (VARMAX). Now that we have multiple estimator types, each estimator type will have one pipeline with a decomposer and one without, so we need the `[True, False]` list and to iterate through it in order to generate the correct number of pipelines.
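A hedged illustration of the counting logic described above (the second estimator name and the pipeline-name suffix are made up for the example; only VARMAX is named in this thread):

```python
# Each estimator type now yields one pipeline with a decomposer and one
# without, so the test iterates over [True, False] per estimator.
estimators = ["VARMAX", "OtherMultiseriesEstimator"]  # second name is hypothetical

expected_pipeline_names = [
    estimator + (" w/ STLDecomposer" if include_decomposer else "")
    for estimator in estimators
    for include_decomposer in [True, False]
]

# One pipeline per (estimator, include_decomposer) pair.
assert len(expected_pipeline_names) == 2 * len(estimators)
print(expected_pipeline_names)
```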
**Reviewer:** I think a clarifying comment would be useful here 👍
**Author:** Added a short one to this test.