Speedup feature selection function #168

Merged 5 commits on Feb 5, 2025
3 changes: 2 additions & 1 deletion src/tabpfn/__init__.py
@@ -1,6 +1,7 @@
+from importlib.metadata import version
+
 from tabpfn.classifier import TabPFNClassifier
 from tabpfn.regressor import TabPFNRegressor
-from importlib.metadata import version

 try:
     __version__ = version(__name__)
49 changes: 28 additions & 21 deletions src/tabpfn/model/encoders.py
@@ -100,33 +100,40 @@ def normalize_data(


 def select_features(x: torch.Tensor, sel: torch.Tensor) -> torch.Tensor:
-    """Select features from the input tensor based on the selection mask.
+    """Select features from the input tensor based on the selection mask,
+    and arrange them contiguously in the last dimension.
+    If the batch size is greater than 1, the features are zero-padded so that the number of features stays fixed.

     Args:
-        x: The input tensor.
-        sel: The boolean selection mask indicating which features to keep.
+        x: The input tensor of shape (sequence_length, batch_size, total_features).
+        sel: The boolean selection mask indicating which features to keep, of shape (batch_size, total_features).

     Returns:
         The tensor with selected features.
+        The shape is (sequence_length, batch_size, number_of_selected_features) if batch_size is 1.
+        The shape is (sequence_length, batch_size, total_features) if batch_size is greater than 1.
     """
-    new_x = x.clone()
-    for B in range(x.shape[1]):
-        if x.shape[1] > 1:
-            new_x[:, B, :] = torch.cat(
-                [
-                    x[:, B, sel[B]],
-                    torch.zeros(
-                        x.shape[0],
-                        x.shape[-1] - sel[B].sum(),
-                        device=x.device,
-                        dtype=x.dtype,
-                    ),
-                ],
-                -1,
-            )
-        else:
-            # If B == 1, we don't need to append zeros, as the number of features can change
-            new_x = x[:, :, sel[B]]
+    B, total_features = sel.shape
+    sequence_length = x.shape[0]
+
+    # If B == 1, we don't need to append zeros, as the number of features does not need to be fixed.
+    if B == 1:
+        return x[:, :, sel[0]]
+
+    new_x = torch.zeros(
+        (sequence_length, B, total_features),
+        device=x.device,
+        dtype=x.dtype,
+    )
+
+    # For each batch entry, compute the number of selected features.
+    sel_counts = sel.sum(dim=-1)  # shape: (B,)
+
+    for b in range(B):
+        s = int(sel_counts[b])
+        if s > 0:
+            new_x[:, b, :s] = x[:, b, sel[b]]
+
     return new_x


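To make the new semantics concrete, here is a minimal usage sketch. It is not part of the PR; it assumes the tabpfn package from this repository is installed so that select_features can be imported from tabpfn.model.encoders.

import torch

from tabpfn.model.encoders import select_features

sequence_length, batch_size, total_features = 6, 3, 5
x = torch.randn(sequence_length, batch_size, total_features)

# Each batch entry may keep a different number of features.
sel = torch.tensor(
    [
        [True, False, True, False, True],    # keeps 3 features
        [True, True, False, False, False],   # keeps 2 features
        [False, False, False, False, True],  # keeps 1 feature
    ]
)

# With batch_size > 1, the selected features are packed to the front of the
# last dimension and the remainder is zero-padded, so the shape is unchanged.
out = select_features(x, sel)
assert out.shape == (sequence_length, batch_size, total_features)

# With batch_size == 1, the last dimension shrinks to the selected count.
out_single = select_features(x[:, :1, :], sel[:1])
assert out_single.shape == (sequence_length, 1, 3)

Packing plus zero-padding keeps the batched tensor rectangular even when entries keep different feature counts, which is what lets the new implementation write into one preallocated buffer instead of building a torch.cat per batch entry.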
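For context on the speedup itself, below is a rough, self-contained timing sketch. It is not from the PR: it inlines the pre-PR implementation as select_features_old and compares it against the merged select_features; the tensor sizes and iteration count are arbitrary.

import time

import torch

from tabpfn.model.encoders import select_features  # the merged version


def select_features_old(x: torch.Tensor, sel: torch.Tensor) -> torch.Tensor:
    # Pre-PR implementation: clones the full input, then materializes a
    # torch.cat of selected features plus a fresh zero pad per batch entry.
    new_x = x.clone()
    for B in range(x.shape[1]):
        if x.shape[1] > 1:
            new_x[:, B, :] = torch.cat(
                [
                    x[:, B, sel[B]],
                    torch.zeros(
                        x.shape[0],
                        x.shape[-1] - sel[B].sum(),
                        device=x.device,
                        dtype=x.dtype,
                    ),
                ],
                -1,
            )
        else:
            new_x = x[:, :, sel[B]]
    return new_x


x = torch.randn(1000, 8, 100)
sel = torch.rand(8, 100) > 0.5

# Both versions should agree exactly on the batched path.
assert torch.equal(select_features_old(x, sel), select_features(x, sel))

for fn in (select_features_old, select_features):
    start = time.perf_counter()
    for _ in range(200):
        fn(x, sel)
    print(f"{fn.__name__}: {time.perf_counter() - start:.3f}s")

The old version pays for a full clone of x and a temporary zero tensor plus a concatenation per batch entry; the new one allocates the zero-padded output once and writes each entry's selected features into a slice.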