to_numpy() is more memory intensive than to_pandas() with string or categorical columns #20765

Open
francescomandruvs opened this issue Jan 17, 2025 · 0 comments
Labels
bug (Something isn't working) · needs triage (Awaiting prioritization by a maintainer) · python (Related to Python Polars)

Checks

  • I have checked that this issue has not already been reported.
  • I have confirmed this bug exists on the latest version of Polars.

Reproducible example

import numpy as np
import random
import string
import polars as pl

num_rows = 300_000
num_num_features = 900
num_cat_features = 50
num_str_features = 50

categories = ['A', 'B', 'C', 'D', 'E']
string_pool = [''.join(random.choices(string.ascii_letters, k=8)) for _ in range(1000)]

num_data = {f'num_feature_{i}': np.random.randn(num_rows) for i in range(num_num_features)}
cat_data = {f'cat_feature_{i}': pl.Series(np.random.choice(categories, num_rows)).cast(pl.Categorical) for i in range(num_cat_features)}
str_data = {f'str_feature_{i}': np.random.choice(string_pool, num_rows) for i in range(num_str_features)}

data = {**num_data, **cat_data, **str_data}

df = pl.DataFrame(data)

print(df.tail())

# %memit is an IPython magic from the memory_profiler package
# (enable it with: %load_ext memory_profiler)
%memit df.to_numpy()
%memit df.to_pandas()

Log output

to_numpy: peak memory: 14616.80 MiB, increment: 11713.10 MiB
to_pandas: peak memory: 5555.46 MiB, increment: 2417.70 MiB

Issue description

I don't know whether this is a real bug or an inherent limitation of the to_numpy() method.
I benchmarked the performance of to_numpy() versus to_pandas() on a dataset containing three different data types (float, string, and categorical). When the dataset consists solely of numerical columns, both methods show similar performance. However, when string or categorical columns are introduced, to_numpy() becomes significantly more memory-intensive.

Expected behavior

I would expect comparable memory usage, but this may depend on how NumPy and pandas represent string and categorical columns.

Installed versions

--------Version info---------
Polars:              1.17.1
Index type:          UInt32
Platform:            Windows-10-10.0.26100-SP0
Python:              3.11.9 (tags/v3.11.9:de54cf5, Apr  2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)]
LTS CPU:             False

----Optional dependencies----
adbc_driver_manager  <not installed>
altair               <not installed>
boto3                1.35.77
cloudpickle          3.1.0
connectorx           <not installed>
deltalake            <not installed>
fastexcel            <not installed>
fsspec               2024.10.0
gevent               <not installed>
google.auth          2.36.0
great_tables         <not installed>
matplotlib           3.9.3
nest_asyncio         1.6.0
numpy                1.26.4
openpyxl             <not installed>
pandas               2.2.3
pyarrow              18.1.0
pydantic             <not installed>
pyiceberg            <not installed>
sqlalchemy           2.0.37
torch                <not installed>
xlsx2csv             <not installed>
xlsxwriter           <not installed>