Add more multipleOf tests #413
Conversation
There were already some tests for fractional multipleOf values, but while writing an implementation I discovered that those were not enough to catch a certain kind of poor implementation. As JSON encodes arbitrary numbers in decimal form, a conforming implementation should either use a lossless decimal representation, or at least operate in terms of "the shortest decimal which is presented as A is a multiple of the shortest decimal which is presented as B". With these small numbers, that is definitely possible even on systems where numbers are represented as half-precision floating-point values.
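For concreteness, here is a minimal sketch of that "shortest decimal" approach in JavaScript (not code from any implementation in this thread; it assumes the engine's shortest round-trip Number-to-string formatting and ignores exponent notation and other edge cases):

    // Convert a number to an exact scaled integer via its shortest decimal form,
    // e.g. 0.0075 -> { digits: 75n, scale: 4 }, meaning 75 * 10^-4.
    function toScaled(x) {
      const s = String(x);                      // shortest round-trip decimal, e.g. "0.0075"
      const [intPart, fracPart = ""] = s.split(".");
      return { digits: BigInt(intPart + fracPart), scale: fracPart.length };
    }

    // x is a multiple of y iff, once both are brought to a common scale,
    // integer division leaves no remainder.
    function isMultipleOf(x, y) {
      const a = toScaled(x), b = toScaled(y);
      const scale = Math.max(a.scale, b.scale);
      const ai = a.digits * 10n ** BigInt(scale - a.scale);
      const bi = b.digits * 10n ** BigInt(scale - b.scale);
      return bi !== 0n && ai % bi === 0n;
    }

    isMultipleOf(0.0075, 0.0001);  // true
    isMultipleOf(0.00751, 0.0001); // false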
I don't think this is actually the case -- the spec definitely doesn't mandate using decimals. And "shortest decimal which is presented as A is multiple of shortest decimal which is presented as B" -- an implementation isn't required to assume that either, as far as I know; there's not really any way to assume it from what you give me.

To be specific, my implementation fails these under the default behavior. My response to users is always "JSON Schema generally sits downstream of your JSON parser. If you want decimals, then parse into decimals and things will work as you expect; otherwise you get float division behavior, which some folks may very well want." I suspect many if not most implementations are in the same boat.

Adding these to optional seems reasonable for implementations that want to test decimal behavior (certainly seems useful) -- but yeah, what makes you say this behavior is required?
@Julian

If JSON Schema validators should not be expected to work reliably with even trivial non-integer values like …
That is not correct; there is a way and I described it, which will pass here even if numbers are parsed as half-precision floats.
If this were correct, then floating-point numbers wouldn't be convertible back to decimal anywhere. But they are (if precision allows, which is certainly true for these numbers): 0.6 parsed from the string "0.6" is always convertible back to "0.6" as a string, in every implementation that I'm aware of. It's arithmetic operations that skew things -- but they are not required here pre-division. This means that it's also convertible to a decimal, which means that this …
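A quick illustration of that round-trip claim in JavaScript (this is engine behavior, not code from either implementation; modern engines print the shortest decimal that parses back to the same double):

    // Parsing the JSON text "0.6" and printing the result gives back "0.6".
    console.log(String(JSON.parse("0.6")) === "0.6");  // true
    // It is arithmetic that skews things, not the parse/print round trip:
    console.log(0.1 + 0.2);                            // 0.30000000000000004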
Will review your comments, but just since I already see the first one:
Good question :) -- probably indeed a mistake (according to my interpretation)!
That is a question of which numbers are representable in the system, which is a completely different issue from this test. I.e.: …
My point was that a JSON Schema implementation being used alongside a JSON parser that parses into floats (one which encounters "0.6" in the JSON and parses it) will see the float …
As a demonstration: you won't be opposed to tests for …? Those are entirely similar from my point of view: this operation is well-defined for representable numbers, because JSON deals with decimal strings and we can expect that the number came in as a decimal. This test uses …. Unrepresentable numbers are a completely different issue from the thing this test is checking.
I'm not sure I'm following you, so we may need to chat quickly on Slack just so I fundamentally understand what you're saying, because yeah I'm having difficulties (which is likely me being misinformed, or not giving enough attention yet). But I don't understand a sentence like:
JSON allows implementations to parse numbers either into decimals or into floats, yes? And JSON Schema implementations, in practice, generally deal with language-level objects, not the original strings, yes? So an arbitrary JSON Schema implementation does not know that you wrote the string "0.6" in the schema, which isn't representable as a float, if the JSON parser gives it something else, right?
That's incorrect. 0.6 is representable as a float (knowing that it came from decimal). String "0.6" converts to float 0.6 and vice versa.
Afaik JSON just describes the interchange format. It doesn't care about floats (or even precision).

The same is true for literal numbers in programming languages, i.e. this should work just as well with a literal …
Floats are not trivial, no matter how simple the number appears in base 10 :)

FWIW, 3.3 / 1.1 = 2.99999999999999956 on my system (x86_64 darwin).

My current inclination is to move all non-integer multipleOf tests into the optional directory, because real-world implementations may experience limitations in the same vein as the optional/bigNum.json tests.
I made all these tests pass much more easily -- by checking whether the quotient, rounded to 16 decimal places, is an integer, i.e. in Perl:
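A rough sketch of that rounding check (in JavaScript rather than Perl, and not the original snippet; the 16-decimal-place rounding is taken from the comment, everything else is illustrative):

    // Hypothetical helper: treat x as a multiple of y if the quotient,
    // rounded to 16 decimal places, has no fractional part.
    function isMultipleOfRounded(x, y) {
      const rounded = Number((x / y).toFixed(16));
      return Number.isInteger(rounded);
    }

    isMultipleOfRounded(0.0075, 0.0001);  // true
    isMultipleOfRounded(0.00751, 0.0001); // false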
There is a difference between the two checks:

    Number.isSafeInteger(0.0075 / 0.0001)  // is `true`  <-- !!!
    0.0075 % 0.0001 === 0                  // is `false`

    Number.isSafeInteger(0.3 / 0.1)        // is `false`
    0.3 % 0.1 === 0                        // is `false`

Seeing how much confusion (and actual interoperability problems) is caused by the multipleOf keyword, we need as many tests for it as we can get!
The modulo operator is not valid for non-integers on the RHS. Attempts to implement the …