support PG array dimensionality #411

Merged · 7 commits · Jan 3, 2024 · Changes shown from 6 commits
38 changes: 36 additions & 2 deletions recap/clients/postgresql.py
@@ -5,6 +5,7 @@

from recap.clients.dbapi import Connection, DbapiClient
from recap.converters.postgresql import PostgresqlConverter
+from recap.types import StructType

PSYCOPG2_CONNECT_ARGS = {
    "host",
@@ -48,8 +49,12 @@


class PostgresqlClient(DbapiClient):
-    def __init__(self, connection: Connection) -> None:
-        super().__init__(connection, PostgresqlConverter())
+    def __init__(
+        self,
+        connection: Connection,
+        converter: PostgresqlConverter = PostgresqlConverter(),
+    ) -> None:
+        super().__init__(connection, converter)

    @staticmethod
    @contextmanager
@@ -78,3 +83,32 @@ def ls_catalogs(self) -> list[str]:
"""
)
return [row[0] for row in cursor.fetchall()]

+    def schema(self, catalog: str, schema: str, table: str) -> StructType:
+        cursor = self.connection.cursor()
+        cursor.execute(
+            f"""
+                SELECT
+                    information_schema.columns.*,
+                    pg_attribute.attndims
+                FROM information_schema.columns
+                JOIN pg_catalog.pg_namespace
+                    ON pg_catalog.pg_namespace.nspname = information_schema.columns.table_schema
+                JOIN pg_catalog.pg_class
+                    ON pg_catalog.pg_class.relname = information_schema.columns.table_name
+                    AND pg_catalog.pg_class.relnamespace = pg_catalog.pg_namespace.oid
+                JOIN pg_catalog.pg_attribute
+                    ON pg_catalog.pg_attribute.attrelid = pg_catalog.pg_class.oid
+                    AND pg_catalog.pg_attribute.attname = information_schema.columns.column_name
+                WHERE table_name = {self.param_style}
+                    AND table_schema = {self.param_style}
+                    AND table_catalog = {self.param_style}
+                ORDER BY ordinal_position ASC
+            """,
+            (table, schema, catalog),
+        )
+        names = [name[0].upper() for name in cursor.description]
+        return self.converter.to_recap(
+            # Make each row a dict with the column names as keys
+            [dict(zip(names, row)) for row in cursor.fetchall()]
+        )
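For reviewers who want to try the new method locally, here is a minimal usage sketch. The DSN and table name are hypothetical, not part of this PR:

import psycopg2

from recap.clients.postgresql import PostgresqlClient

# Hypothetical DSN and table; adjust for your environment.
conn = psycopg2.connect("dbname=testdb user=postgres")
client = PostgresqlClient(conn)

# Returns a Recap StructType for the table; the query above fetches
# pg_attribute.attndims so the converter can see array dimensionality.
struct = client.schema("testdb", "public", "my_table")
print(struct)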
67 changes: 46 additions & 21 deletions recap/converters/postgresql.py
@@ -8,6 +8,7 @@
    FloatType,
    IntType,
    ListType,
+    NullType,
    ProxyType,
Contributor:

Niiiice. Took me a sec to grok why we didn't need this anymore. Fully walking the n_dimensions means we don't need self-references. Awesome.

One question/nuance here: the PG dimensions are just a suggestion.

"The current implementation does not enforce the declared number of dimensions either. Arrays of a particular element type are all considered to be of the same type, regardless of size or number of dimensions. So, declaring the array size or number of dimensions in CREATE TABLE is simply documentation; it does not affect run-time behavior."

https://www.postgresql.org/docs/current/arrays.html

So the question is: do we want Recap to reflect the DB's data or its schema? My implementation (with ProxyType) reflected the data. Yours changes it to reflect the schema. Perhaps we want it configurable, with one as the default? WDYT?

Contributor (Author):

I like to think the schema is the beacon of truth for what the user intends for the column. If users are leveraging the column differently than the schema's representation, they should fix the schema. But I could see past mistakes leading to a situation where this isn't true, which would then lead to Recap constructing a false narrative about the data. I think making it configurable makes sense. Maybe default to ProxyType since that's the safer assumption? Would we want to add config params to the PostgresqlConverter constructor?

Contributor:

Yeah, can you add a param to the init to configure it? Defaulting to proxy is safer, as you say.
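For context on the docs quote above, a quick sketch (with a hypothetical table and DSN) showing that PostgreSQL accepts values of any dimensionality regardless of what the column declares:

import psycopg2

# Hypothetical DSN; adjust for your environment.
conn = psycopg2.connect("dbname=testdb user=postgres")
cur = conn.cursor()

# Column declared as a two-dimensional array...
cur.execute("CREATE TABLE dims_demo (grid int[][])")

# ...but PostgreSQL accepts a one-dimensional value just as happily,
# because the declared dimensionality is documentation only.
cur.execute("INSERT INTO dims_demo (grid) VALUES (ARRAY[1, 2, 3])")
conn.commit()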

    RecapType,
    RecapTypeRegistry,
@@ -24,7 +25,15 @@


class PostgresqlConverter(DbapiConverter):
-    def __init__(self, namespace: str = DEFAULT_NAMESPACE) -> None:
+    def __init__(
+        self,
+        ignore_array_dimensionality: bool = True,
Contributor (Author):

I wonder if the negation support_array_dimensionality = False or include_array_dimensions = False would be more intuitive for the user.

Contributor:

My vote is for enforce_array_dimensions.

+        namespace: str = DEFAULT_NAMESPACE,
+    ):
+        # Since array dimensionality is not enforced by PG schemas:
+        # if `ignore_array_dimensionality = True`, read arrays irrespective of how many dimensions they have
+        # if `ignore_array_dimensionality = False`, read arrays as nested lists
+        self.ignore_array_dimensionality = ignore_array_dimensionality
        self.namespace = namespace
        self.registry = RecapTypeRegistry()
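A usage sketch for the new flag, wiring a dimension-enforcing converter into the client from the first file (database objects are hypothetical):

import psycopg2

from recap.clients.postgresql import PostgresqlClient
from recap.converters.postgresql import PostgresqlConverter

# Hypothetical DSN; adjust for your environment.
conn = psycopg2.connect("dbname=testdb user=postgres")

# ignore_array_dimensionality=False reads arrays as nested lists with
# exactly the number of dimensions declared in the schema (ATTNDIMS).
converter = PostgresqlConverter(ignore_array_dimensionality=False)
client = PostgresqlClient(conn, converter)
struct = client.schema("testdb", "public", "dims_demo")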

@@ -34,6 +43,7 @@ def _parse_type(self, column_props: dict[str, Any]) -> RecapType:
        octet_length = column_props["CHARACTER_OCTET_LENGTH"]
        max_length = column_props["CHARACTER_MAXIMUM_LENGTH"]
        udt_name = (column_props["UDT_NAME"] or "").lower()
+        ndims = column_props["ATTNDIMS"]

        if data_type in ["bigint", "int8", "bigserial", "serial8"]:
            base_type = IntType(bits=64, signed=True)
@@ -102,29 +112,44 @@ def _parse_type(self, column_props: dict[str, Any]) -> RecapType:
                    # * 8 because bit columns use bits not bytes.
                    "CHARACTER_MAXIMUM_LENGTH": MAX_FIELD_SIZE * 8,
                    "UDT_NAME": None,
+                    "ATTNDIMS": 0,
                }
            )
-            column_name_without_periods = column_name.replace(".", "_")
-            base_type_alias = f"{self.namespace}.{column_name_without_periods}"
-            # Construct a self-referencing list comprised of the array's value
-            # type and a proxy to the list itself. This allows arrays to be an
-            # arbitrary number of dimensions, which is how PostgreSQL treats
-            # lists. See https://github.com/recap-build/recap/issues/264 for
-            # more details.
-            base_type = ListType(
-                alias=base_type_alias,
-                values=UnionType(
-                    types=[
-                        value_type,
-                        ProxyType(
-                            alias=base_type_alias,
-                            registry=self.registry,
-                        ),
-                    ],
-                ),
-            )
-            self.registry.register_alias(base_type)
+            if self.ignore_array_dimensionality:
+                column_name_without_periods = column_name.replace(".", "_")
+                base_type_alias = f"{self.namespace}.{column_name_without_periods}"
+                # Construct a self-referencing list comprised of the array's value
+                # type and a proxy to the list itself. This allows arrays to be an
+                # arbitrary number of dimensions, which is how PostgreSQL treats
+                # lists. See https://github.com/recap-build/recap/issues/264 for
+                # more details.
+                base_type = ListType(
+                    alias=base_type_alias,
+                    values=UnionType(
+                        types=[
+                            value_type,
+                            ProxyType(
+                                alias=base_type_alias,
+                                registry=self.registry,
+                            ),
+                        ],
+                    ),
+                )
+                self.registry.register_alias(base_type)
+            else:
+                base_type = self._create_n_dimension_list(value_type, ndims)
        else:
            raise ValueError(f"Unknown data type: {data_type}")

        return base_type

+    def _create_n_dimension_list(self, base_type: RecapType, ndims: int) -> RecapType:
+        """
+        Build a list type with `ndims` dimensions containing nullable `base_type` as the innermost value type.
+        """
+        if ndims == 0:
+            return UnionType(types=[NullType(), base_type])
Contributor:

I'm curious about this one. It seems right, but I'm not 100% sure. As I read it, there are a few things:

1. DbapiConverter handles root-level NULLABLE fields (https://github.com/recap-build/recap/blob/main/recap/converters/dbapi.py#L15-L16)
2. This code here handles NULLABLE items in a PG ARRAY field.

I think this is the right behavior. But I'm curious: are PG arrays always allowed NULLs in their dimensional values? I couldn't find good docs on this. I haven't tested it out.

Contributor (Author):

I did some testing and digging, and afaict the answer is yes: the innermost value can always be NULL. Enforcing non-nulls requires adding some sort of CHECK-based validation (https://stackoverflow.com/a/59421233), which seems like a pretty challenging rabbit hole of digging through information_schema.check_constraints.

+        else:
+            return ListType(
+                values=self._create_n_dimension_list(base_type, ndims - 1),
+            )
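To make the recursion concrete, here is a hypothetical walk-through (not code in this PR) for a column declared int[][] (ATTNDIMS = 2) with a 32-bit integer element type:

# _create_n_dimension_list(IntType(bits=32), ndims=2) expands as:
# -> ListType(values=_create_n_dimension_list(IntType(bits=32), 1))
# -> ListType(values=ListType(values=_create_n_dimension_list(IntType(bits=32), 0)))
# -> ListType(values=ListType(values=UnionType(types=[NullType(), IntType(bits=32)])))
#
# i.e. two nested lists, with a nullable 32-bit int as the innermost value type.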