feat: Add check for schema read compatibility #554
Conversation
Codecov Report
Attention: Patch coverage is …
@@           Coverage Diff            @@
##             main     #554      +/-   ##
==========================================
+ Coverage   83.92%   84.07%    +0.14%
==========================================
  Files          75       76        +1
  Lines       17277    17526      +249
  Branches    17277    17526      +249
==========================================
+ Hits        14500    14735      +235
- Misses       2078     2079        +1
- Partials      699      712       +13
);
name_equal && nullability_equal && data_type_equal
}
None => read_field.is_nullable(),
The None case is a point at which I differ from the delta implementation. I'm not convinced by the code there. If we don't find the read field in the existing schema, then we just ignore it. I think this should only pass if the new field in the read schema is nullable.
I may be missing something though 🤔
Nullability is... complicated. But I think what you say makes sense -- technically it could be ok for the read field to not be nullable, if the parent is nullable and the parent is null for all rows where the child is null. But if the parent is hard-wired null then we shouldn't be recursing to its children in the first place.
// == read_nullable || !existing_nullable
read_nullable || !existing_nullable
}
fn is_struct_read_compatible(existing: &StructType, read_type: &StructType) -> bool {
I was considering using a DeltaResult instead of a bool so we can return better errors about how a schema differs. Thoughts?
I think that makes sense. Something similar to ValidateColumnMappings in #543, which returns an Err with the offending column name path?
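For illustration, a rough sketch of that direction (helper/method names and error text here are assumptions, not the PR's actual code):

```rust
use crate::schema::StructType;
use crate::{DeltaResult, Error};

// Hypothetical sketch: return a DeltaResult so the caller learns *which* column broke
// compatibility, instead of a bare `false`.
fn is_struct_read_compatible(existing: &StructType, read_type: &StructType) -> DeltaResult<()> {
    for read_field in read_type.fields() {
        match existing.fields().find(|f| f.name() == read_field.name()) {
            Some(existing_field) => {
                // Nullability may only stay the same or widen.
                if existing_field.is_nullable() && !read_field.is_nullable() {
                    return Err(Error::generic(format!(
                        "column '{}': read schema tightens nullability",
                        read_field.name()
                    )));
                }
                // Exact type match for now (type widening is a possible follow-up).
                if existing_field.data_type() != read_field.data_type() {
                    return Err(Error::generic(format!(
                        "column '{}': incompatible data types",
                        read_field.name()
                    )));
                }
            }
            // A column only present in the read schema must be nullable.
            None if read_field.is_nullable() => {}
            None => {
                return Err(Error::generic(format!(
                    "new non-nullable column '{}' in read schema",
                    read_field.name()
                )))
            }
        }
    }
    Ok(())
}
```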
use crate::schema::{DataType, Schema, StructField, StructType};

fn is_nullability_compatible(existing_nullable: bool, read_nullable: bool) -> bool {
    // The case to avoid is when the read_schema is non-nullable and the existing one is nullable.
"avoid" as in "it's illegal to attempt reading a nullable underlying as non-nullable"? (maybe just say that?)
Also, this method takes two args of the same type, but it is not commutative. Subtly error-prone, and I don't know the best way to make it safe? The arg names are a good start, but rust doesn't allow named args at call sites. And the name of the function does not give any indication of the correct arg order.
Is it worth using a struct just to force named args? Seems clunky. Or maybe we can choose an asymmetric function name of some kind, that indicates which arg comes first?
(whatever solution we choose, we should probably apply it to the is_struct_read_compatible as well)
Another possibility: Add these as methods on StructType itself? Then callers would be encouraged to do things like:
table_schema.can_read_as(read_schema)
... but I don't know a good way to do that for the nullability compat check since it's a plain boolean and doesn't always apply to a struct field (can also be array element or map value).
We could define a helper trait for struct/map/array, but that just pushes the problem to the trait impl (and there is only one call site for each type right now).
Yeah I think the nullability check could benefit from using a struct. Pushing the impl to a trait would just have the same check in all of them anyway.
let existing_names: HashSet<String> = existing
    .fields()
    .map(|field| field.name().clone())
    .collect();
let read_names: HashSet<String> = read_type
    .fields()
    .map(|field| field.name().clone())
    .collect();
if !existing_names.is_subset(&read_names) {
    return false;
}
read_type
    .fields()
    .all(|read_field| match existing_fields.get(read_field.name()) {
It seems like we should only need to materialize a hash set for one side (build), and just stream the other side's fields past it (probe)?
Also: kernel's StructType::fields member is already an IndexMap, so you should have O(1) name lookups without building any additional hash sets.
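A hedged sketch of the build/probe idea, accessing the fields map directly as the diff above does (method names assumed, not the PR's actual code):

```rust
use crate::schema::StructType;

// No extra HashSets: just the IndexMap that StructType already keeps its fields in.
fn fields_read_compatible(existing: &StructType, read_type: &StructType) -> bool {
    // Probe: every existing field must still appear in the read schema (no dropped columns)...
    existing
        .fields
        .keys()
        .all(|name| read_type.fields.contains_key(name))
        // ...and every read field must match its existing counterpart, or be a new nullable one.
        && read_type.fields.values().all(|read_field| {
            match existing.fields.get(read_field.name()) {
                Some(existing_field) => {
                    existing_field.data_type() == read_field.data_type()
                        && (read_field.is_nullable() || !existing_field.is_nullable())
                }
                None => read_field.is_nullable(),
            }
        })
}
```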
One thing that was missing previously in this code was that they check the fields modulo case sensitivity. The reasons delta-spark does this, AFAICT, are (see the sketch below):
- they want to ensure that there are no duplicate fields in a schema that only differ in case.
- delta-spark typically ignores fields that are in the read schema but not the current one, as I pointed out above. However, they don't want fields that only differ in case to be treated as a new struct field, so they use a case-insensitive map.
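A rough sketch of that approach, assuming the field accessors used in the diff above (not the PR's exact code):

```rust
use std::collections::HashMap;

use crate::schema::{StructField, StructType};
use crate::{DeltaResult, Error};

// Hedged sketch: keying the lookup map by lowercased name means (1) case-only duplicates
// collapse and can be caught with a length check, and (2) a read field that differs only
// by case still finds its existing counterpart.
fn case_insensitive_field_map(existing: &StructType) -> DeltaResult<HashMap<String, &StructField>> {
    let field_map: HashMap<String, &StructField> = existing
        .fields()
        .map(|field| (field.name().to_lowercase(), field))
        .collect();
    if field_map.len() != existing.fields().count() {
        return Err(Error::generic(
            "Delta tables don't allow field names that only differ by case",
        ));
    }
    Ok(field_map)
}
```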
The two things I'm most unsure about currently are:
- How the case-sensitive cases are handled
- The case where the field is in the read schema but not the existing schema
Would appreciate a second pair of eyes from @zachschuermann and @nicklan as well.
Force-pushed from ba3d0ad to b518e25
TODO: Add doc comments. I think I want those uncertainties cleared before spending time on docs.
Force-pushed from c6deb44 to 596aa04
flushing first pass comments. will review more soon
kernel/src/schema_compat.rs (Outdated)
use crate::utils::require;
use crate::{DeltaResult, Error};

struct NullabilityCheck {
nit: doc comment
kernel/src/schema_compat.rs (Outdated)
}
}

impl StructField {
I need to think about doing this as multiple impl blocks. It keeps the code nice and separate, but does make it more complex to find where various bits of code are. At the very least, can you not import StructField, StructType, or DataType and do impl crate::schema::StructField so it's clear that it's on that struct?
I can do that. In terms of the motivation for separate impl blocks, it goes back to Ryan's suggestion to make it clear what order you're passing in arguments to the compatibility check.
Also I removed the import and switched to using Self. lmk if that looks good.
kernel/src/schema_compat.rs (Outdated)
nullable: bool,
read_nullable: bool,
}
Is there a benefit to making this a struct and then having an impl? Seems like it'd be easier/cleaner as just a two-arg method from the usage I've seen so far.
This is related to what @scovich was saying here. A nullability check function would not be commutative, and this could easily cause subtle errors with a swap. We can't do impls for a primitive type like bool, so bool.can_read_as(other) is off the table (and perhaps isn't a good idea anyway).
I could alternatively do something like this:
// Option A
struct NullabilityFlag(bool);
struct ReadNullabilityFlag(bool);
fn is_compatible(a: NullabilityFlag, b: ReadNullabilityFlag) {
    let NullabilityFlag(nullable) = a;
    let ReadNullabilityFlag(read_nullable) = b;
    ...
}
is_compatible(NullabilityFlag(self.nullable), ReadNullabilityFlag(read_field.nullable));
// Option B:
NullabilityFlag(self.nullable).can_read_as(NullabilityFlag(read_field.nullable)) // no need for read version
Looking at these laid out, I think Option B is the winner. What do you think?
Update: switched to option B for now
Few more comments. It's a shame we can't use EnsureDataTypes for this. That expects to compare between kernel and arrow types, which doesn't work here. We might want to consider whether it's worth converting the arrow types into kernel types before doing that check and then re-writing that check in terms of only kernel types. It'd be a little less efficient, but we wouldn't have two somewhat tricky ways of checking compatibility.
kernel/src/schema_compat.rs (Outdated)
//! compatibility is [`can_read_as`].
//!
//! # Examples
//! ```rust, ignore
why ignore?
This is pub(crate) and doc tests only work with pub functions.
Doc tests compile with the doctest feature enabled, so we could potentially make the function pub in that case. We could also invoke doctests with the developer-visibility feature enabled, to pick up that set of functions as well.
Looks like dev-visibility worked!
Update on dev-visibility: it doesn't seem to work when calling cargo test from the kernel crate. It does seem to work from the root of the repository. I spent about an hour reading up on cargo/doctest docs and couldn't figure it out. In the end I decided to set the doctest back to ignore.
kernel/src/schema_compat.rs (Outdated)
.collect();
require!(
    field_map.len() == self.fields.len(),
    Error::generic("Delta tables don't allow field names that only differ by case")
This feels like it would be a bug in the schema of the table. Should we not catch this case higher up when trying to construct the schema?
Agreed that this is a weird place to put it. I was keeping parity with delta-spark. Perhaps instead we should bake this into a StructType::try_from_string. What do you think?
I like clearer and earlier checks, when possible. The fewer places we have to worry about unvalidated objects the better. Spark is definitely not a model to emulate in that regard.
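A hedged sketch of what such an earlier, construction-time check could look like (the constructor name is hypothetical and not part of the current API):

```rust
use std::collections::HashSet;

use crate::schema::{StructField, StructType};
use crate::{DeltaResult, Error};

// Hypothetical constructor-level validation, so unvalidated schemas never exist downstream.
fn try_new(fields: impl IntoIterator<Item = StructField>) -> DeltaResult<StructType> {
    let fields: Vec<StructField> = fields.into_iter().collect();
    let mut seen = HashSet::new();
    for field in &fields {
        // Reject schemas whose field names collide once case is ignored.
        if !seen.insert(field.name().to_lowercase()) {
            return Err(Error::generic(
                "Delta tables don't allow field names that only differ by case",
            ));
        }
    }
    Ok(StructType::new(fields))
}
```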
Made an issue to track this here.
Sounds good thanks. I marked it as a good first issue, as those are nice to have
kernel/src/schema_compat.rs (Outdated)
);

// Check that the field names are a subset of the read fields.
if !field_map.keys().all(|name| read_field_names.contains(name)) {
In the scala code they have allowMissingColumns, which does allow dropping of columns. I'm not quite clear when you'd want to set that or not though. In the case of CDF though, why isn't it okay for us to have dropped a column? Assuming we're reading as the final schema (which I think is the case), then if there were extra columns when we started, we just... don't return those?
> In the scala code they have allowMissingColumns, which does allow dropping of columns.

Regarding the flags, I chatted with @zachschuermann and we decided to do the simple implementation in this round without all the configurations that the scala code provides. For CDF specifically, it seems that it only ever uses the forbidTightenNullability flag and none of the others. I also think we may not need the nullability flag because we only use the final schema.

> Assuming we're reading as the final schema (which I think is the case),

That's correct.

> In the case of CDF though, why isn't it okay for us to have dropped a column? [...] if there were extra columns when we started, we just... don't return those?

I think the reason is the output CDF of such a table may not make sense. Consider a table with columns a and b. The final schema only has column a. You could imagine all of the changes in the change data feed are made to b, but if you read with a as the only column you get:
{(_change_type: update_preimage, a: 1),
(_change_type: update_postimage, a: 1)}
Seems that nothing's changed at all.
More generally: I think that if data is written in schema A, and we read it with schema B, there should not be any data loss. Hence A's columns should be a subset of B's columns. Dropping a column is essentially a projection, and they should be explicitly requested by the user/query.
AFAIK, clients must anyway be prepared to deal with spurious row changes? It can also happen if we don't distinguish copied rows from updated ones. It can also happen if the reader only cares about a subset of columns that didn't happen to change.
@scovich so are you saying that it actually might be acceptable to drop the columns that aren't in the end schema?
Seems like it? In my experience most users would prefer spurious changes if the alternative is pipeline failure.
It would be nice to get confirmation from some CDF workload experts though. If delta-spark drops columns in this case, that's probably an indirect confirmation?
So to my understanding, they don't allow columns to be dropped in CDF. Their schema utils function lets you specify that you tolerate dropped columns, but CDF never uses it.
Here are the only call sites for schema compat in CDF (1 & 2). They change forbidTightenNullability, but all other flags are default.
SchemaUtils.isReadCompatible(
  existingSchema = metadata.schema,
  readSchema = readSchemaSnapshot.schema,
  forbidTightenNullability = true)
Note that allowMissingColumns is false by default.
def isReadCompatible(
    existingSchema: StructType,
    readSchema: StructType,
    forbidTightenNullability: Boolean = false,
    allowMissingColumns: Boolean = false,
    allowTypeWidening: Boolean = false,
    newPartitionColumns: Seq[String] = Seq.empty,
    oldPartitionColumns: Seq[String] = Seq.empty): Boolean = {
Here we see that if we do not allow missing columns, then the schema fields should be a subset of the read schema (ie no dropped columns).
if (!allowMissingColumns &&
    !(existingFieldNames.subsetOf(newFields) &&
    isPartitionCompatible(newPartitionColumns, oldPartitionColumns))) {
  // Dropped a column that was present in the DataFrame schema
  return false
}
Note also that CDF doesn't use the partitionColumns parts of the schema comparison.
I'm planning on talking to some of the folks who worked on this in the past, but I believe what I have currently matches the CDF behaviour for delta-spark.
if we are matching current delta-spark and in the future this would be an expansion of functionality (allow more than what we did before) then it seems like we can move forward and perhaps create an issue to revisit this?
kernel/src/schema_compat.rs (Outdated)
self_map.value_type().can_read_as(read_map.value_type())?;
}
(a, b) => {
    // TODO: In the future, we will change this to support type widening.
check_cast_compat basically does this but for two arrow types. Just FYI.
Mentioned in this issue: #623
@nicklan Ooh, good callout that EnsureDataTypes exists. I'll take a closer look at it and see if it matches our needs.
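For context, a hedged sketch of the kind of widening-aware primitive check a follow-up might add (the allowed pairs below are illustrative only, not the actual type-widening rules):

```rust
use crate::schema::PrimitiveType;

// Hypothetical sketch: exact equality plus an illustrative set of integer widenings.
// The real rules would come from the type widening feature, not from this list.
fn primitive_can_read_as(existing: &PrimitiveType, read: &PrimitiveType) -> bool {
    use PrimitiveType::*;
    existing == read
        || matches!(
            (existing, read),
            (Byte, Short | Integer | Long) | (Short, Integer | Long) | (Integer, Long)
        )
}
```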
Force-pushed from 15173fe to 1fa87eb
lgtm with the follow-ups. thanks!
Looks great, left a handful of comments! Schema compatibility/comparison is certainly an intricate subject - thanks @OussamaSaoudi!
kernel/src/lib.rs (Outdated)
@@ -87,6 +87,7 @@ pub mod table_properties;
pub mod transaction;

pub(crate) mod predicates;
pub(crate) mod schema_compat;
organization nit: could we call it compare etc.? and just put it as a sub-module of the existing schema module?
kernel/src/schema_compat.rs (Outdated)
//! StructField::new("id", DataType::LONG, false), | ||
//! StructField::new("value", DataType::STRING, true), | ||
//! ]); | ||
//! let read_schema = StructType::new([ |
nit: maybe just leave a quick comment here in the docstring that says something like "schemas are compatible since the 'unknown' column is nullable"
kernel/src/schema_compat.rs (Outdated)
// == read_nullable || !existing_nullable
require!(
    read_nullable.0 || !self.0,
    Error::generic("Read field is non-nullable while this field is nullable")
maybe useful to make a new schema error type? (comment applies to many places below)
Good idea. I'll try creating a SchemaCompareResult or something to that effect.
@zachschuermann Added an error type for the module schema::compare.
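A hedged sketch of the shape such an error type might take (variant names are assumptions; the type actually added in the PR may differ):

```rust
// Hypothetical error type for schema comparison, so callers can match on the failure kind
// instead of parsing a generic error string.
#[derive(Debug, thiserror::Error)]
pub enum SchemaComparisonError {
    #[error("read field is non-nullable while the existing field is nullable")]
    NullabilityTightening,
    #[error("field '{0}' has an incompatible data type in the read schema")]
    TypeMismatch(String),
    #[error("field '{0}' is missing from the read schema")]
    MissingField(String),
    #[error("schema contains field names that only differ by case")]
    CaseOnlyFieldNames,
}

pub type SchemaComparisonResult = Result<(), SchemaComparisonError>;
```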
kernel/src/schema_compat.rs (Outdated)
impl NullabilityFlag {
    /// Represents a nullability comparison between two schemas' fields. Returns true if the
    /// read nullability is the same or wider than the nullability of self.
    fn can_read_as(&self, read_nullable: NullabilityFlag) -> DeltaResult<()> {
we impl fn can_read_as() for a number of structs; I wonder if it would be useful to define a SchemaComparison trait which has can_read_as?
I think the can_read_as could be a good first step to unifying with EnsureDataType too. Given the new error type, I think the trait will keep the format nice and consistent.
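A hedged sketch of that trait shape, reusing the SchemaComparisonResult/SchemaComparisonError sketched above (exact names and signatures are assumptions):

```rust
/// Hypothetical trait: one entry point shared by every type taking part in the check.
pub(crate) trait SchemaComparison {
    /// Ok(()) if data written with `self`'s schema/type can be read using `read_type`.
    fn can_read_as(&self, read_type: &Self) -> SchemaComparisonResult;
}

/// Newtype so the non-commutative nullability check can't have its arguments swapped.
pub(crate) struct Nullable(pub bool);

impl SchemaComparison for Nullable {
    fn can_read_as(&self, read_nullable: &Self) -> SchemaComparisonResult {
        // Only tightening is illegal: existing nullable but read non-nullable.
        if self.0 && !read_nullable.0 {
            return Err(SchemaComparisonError::NullabilityTightening);
        }
        Ok(())
    }
}

// StructType and DataType would get analogous impls that recurse through their fields and
// element/value types.
```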
kernel/src/schema_compat.rs (Outdated)
}
impl crate::schema::StructType {
    /// Returns `Ok` if this [`StructType`] can be read as `read_type` in the read schema.
    #[allow(unused)]
assume this is used in a follow-up, but let's remember to delete. add todo? (or will clippy yell if it's used and you have an allow(unused)? I don't remember.)
Clippy won't complain unfortunately 😔 I can add this as a TODO
kernel/src/schema_compat.rs (Outdated)
/// Returns `Ok` if this [`StructType`] can be read as `read_type` in the read schema.
#[allow(unused)]
pub(crate) fn can_read_as(&self, read_type: &Self) -> DeltaResult<()> {
    let field_map: HashMap<String, &crate::schema::StructField> = self
any reason to not import crate::schema::StructField? (also if we move modules this will be a non-issue)
Yeah I think nick was worried it wouldn't be clear which schema, but now that it's under the schema module it should be obvious.
kernel/src/schema_compat.rs (Outdated)
);

let read_field_names: HashSet<String> =
    read_type.fields.keys().map(|x| x.to_lowercase()).collect();
for this and usage below: perhaps for now we can specify this is a lowercase hashset? but in the future I hope we can do something better to enforce the case-insensitivity instead of relying on to_lowercase before interaction with the set?
Changed the name to reflect that. In terms of the future, I don't know how else to do this check. Delta-spark uses a case-insensitive map, which I don't think we should add to kernel. Also for clarity, the lowercase is being used for two reasons:
- To ensure that the schema is valid by asserting that it doesn't have duplicate fields that differ by case
- To find fields shared between this schema and the read schema. Note that we later assert that the field names are exactly the same.
kernel/src/schema_compat.rs (Outdated)
}

#[cfg(test)]
mod tests {
if there are some existing useful tests in delta-spark perhaps we can have an issue to evaluate/migrate those?
Created issue: #655
actually one final comment on a few tests to add. Also, it might help to add actual error checks instead of just is_err, since it may not be erroring in the place expected.
kernel/src/schema_compat.rs (Outdated)
StructField::new("name", DataType::STRING, false), | ||
StructField::new("age", DataType::INTEGER, true), | ||
]); | ||
assert!(existing_schema.can_read_as(&read_schema).is_err()); |
could add the other way around to catch the other case (this only checks existing_schema and fails)
kernel/src/schema_compat.rs (Outdated)
StructField::new("age", DataType::INTEGER, true), | ||
]); | ||
assert!(existing_schema.can_read_as(&read_schema).is_err()); | ||
} |
can we add a test for when the read field names aren't a subset of the original Schema names? e.g. read {a, b, c} for struct/schema {a, b}?
Worked this into the new differ_by_nullable_column and differ_by_non_nullable_column. It checks both directions.
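A hedged sketch of roughly that kind of test (the test name and columns are illustrative, not the PR's actual tests):

```rust
use crate::schema::{DataType, StructField, StructType};

#[test]
fn extra_read_column_must_be_nullable() {
    // Existing schema {a, b}; read schema {a, b, c}.
    let existing_schema = StructType::new([
        StructField::new("a", DataType::LONG, false),
        StructField::new("b", DataType::STRING, true),
    ]);
    // A new nullable column in the read schema is compatible...
    let read_with_nullable_c = StructType::new([
        StructField::new("a", DataType::LONG, false),
        StructField::new("b", DataType::STRING, true),
        StructField::new("c", DataType::INTEGER, true),
    ]);
    assert!(existing_schema.can_read_as(&read_with_nullable_c).is_ok());
    // ...but a new non-nullable column is not.
    let read_with_non_nullable_c = StructType::new([
        StructField::new("a", DataType::LONG, false),
        StructField::new("b", DataType::STRING, true),
        StructField::new("c", DataType::INTEGER, false),
    ]);
    assert!(existing_schema
        .can_read_as(&read_with_non_nullable_c)
        .is_err());
}
```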
Force-pushed from 1fa87eb to d0f3ebc
kernel/src/schema/schema_compare.rs (Outdated)
/// The nullability flag of a schema's field. This can be compared with a read schema field's
/// nullability flag using [`NullabilityFlag::can_read_as`].
#[allow(unused)]
pub(crate) struct NullabilityFlag(bool);
NullabilityFlag is a bit of a mouthful; would it make sense to just call it Nullable? As in Nullable(true) vs. Nullable(false)? We could even potentially replace the bare nullable: bool arg of e.g. StructField, to eliminate the remaining (admittedly small) call site ambiguity?
can_read_as has been moved to the trait. If we want impl Into<Nullable>, we'd have to change the SchemaComparison trait to have the method type fn can_read_as(&self, read_type: impl Into<Self>) -> SchemaComparisonResult. I'm not sure I like using impl Into for this. I've had issues before where impl Into in a trait impl covers too many cases, and it would've forced a breaking change.
You mentioned the option to have StructField.nullable: Nullable. I like this, but it seems like we're changing the entire StructType/StructField type just to support nullability checks. Is there any other reason we may want to have Nullable be part of StructField?
> can_read_as has been moved to the trait

Ah, I missed that detail. Agree we shouldn't change it.
As for StructField: I don't think we need the change. The nullable: bool field was already very clear, and the introduction of StructField::nullable and not_null methods eliminated dozens of call sites for StructField::new that passed bare boolean literals.
So really, my suggestion was spurred by a stale memory (predating those helpers). Back then, requiring StructField::new to take a Nullable instead of bool would have made call sites much easier to understand, e.g.:
StructField::new("name", data_type, Nullable(false))
What changes are proposed in this pull request?
This PR introduces the schema::compare module to handle schema compatibility comparisons through the SchemaComparison trait. Using the trait's can_read_as method, you can check that data with a given schema can be read using another schema. This check is useful for implementing schema evolution checks in CDF. The SchemaComparison trait is implemented for DataType, StructType, and the newly added Nullable, which represents a nullability bool.
Closes #523
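A hedged usage sketch of the check described above (paths and visibility are crate-internal in the PR, so this is illustrative rather than a public API example):

```rust
use crate::schema::{DataType, StructField, StructType};

// Illustrative only: data written with `existing_schema` can be read with `read_schema`
// because the only added column is nullable.
fn example() {
    let existing_schema = StructType::new([
        StructField::new("id", DataType::LONG, false),
        StructField::new("value", DataType::STRING, true),
    ]);
    let read_schema = StructType::new([
        StructField::new("id", DataType::LONG, false),
        StructField::new("value", DataType::STRING, true),
        StructField::new("year", DataType::INTEGER, true),
    ]);
    assert!(existing_schema.can_read_as(&read_schema).is_ok());
}
```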
How was this change tested?
Schema compatibility tests are added that check the following:
- can_read_as is reflexive