Fast byteLength() #333
Comments
I am not 100% sure this is correct ... but ... it's also pretty slow, and I'm starting to wonder if the slowness doesn't come directly from iterating the string's internal code points:

```js
"use strict"
module.exports = (input) => {
  let total = 0;
  for (const c of input) {
    const p = c.codePointAt(0);
    if (p < 0x80) total += 1;
    else if (p < 0x800) total += 2;
    else if (p < 0x10000) total += 3; // rest of the BMP, including lone surrogates
    else total += 4;                  // astral code points (surrogate pairs)
  }
  return total;
};
```

Results on my laptop:
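As a quick sanity check (a minimal sketch; the module path is just whatever the loop above happens to be saved as), a hand-rolled counter like this can be compared against TextEncoder itself on strings covering the 1-, 2-, 3-, and 4-byte ranges:

```js
"use strict"
// assumes the code-points loop above is saved as ./byteLength.js
const byteLength = require("./byteLength")
const encoder = new TextEncoder()

for (const s of ["abc", "¢€", "𐍈", "a¢€𐍈", "\uD800"]) {
  const expected = encoder.encode(s).length
  const actual = byteLength(s)
  console.log(JSON.stringify(s), actual === expected ? "ok" : `mismatch ${actual} !== ${expected}`)
}
```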
@jamiebuilds can you give use case(s) where you only care about the byte length and don't need the encoded data?
Did some extra tests to verify whether the buffer creation is the reason for such a slowdown, and indeed this proves it. New buffer each time:

```js
"use strict"
let input = require("../input")
let encoder = new TextEncoder()

module.exports = () => {
  // size as worst case scenario
  const ui8Array = new Uint8Array(input.length * 4);
  return encoder.encodeInto(input, ui8Array).written;
}
```

This is still faster than
Now, if there is no new buffer creation at all:

```js
"use strict"
let input = require("../input")
let encoder = new TextEncoder()

// size as worst case scenario
const ui8Array = new Uint8Array(input.length * 4);

module.exports = () => {
  return encoder.encodeInto(input, ui8Array).written;
}
```

The result is better than the code-points loop:
I suppose a method that just counts the byte length would make it possible to get performance closer to Node.js Buffer.
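For reference, this is what the Node.js side of that comparison looks like: Buffer.byteLength computes the encoded size directly from the string, without first materializing an encoded buffer (a minimal sketch):

```js
"use strict"
// Node.js only: Buffer.byteLength(string, encoding) returns the UTF-8 size
// of the string without allocating a Buffer for its contents.
const { Buffer } = require("node:buffer")

console.log(Buffer.byteLength("a", "utf8"))    // 1
console.log(Buffer.byteLength("¢", "utf8"))    // 2
console.log(Buffer.byteLength("€", "utf8"))    // 3
console.log(Buffer.byteLength("𐍈", "utf8"))    // 4
console.log(Buffer.byteLength("a¢€𐍈", "utf8")) // 10
```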
@jakearchibald I work on an end-to-end encrypted messaging app where we can't inspect the types of payloads being sent between clients on the server, so there are many places where we need to enforce a max byte length on the client to prevent certain types of abuse from overloading client apps. Right now we mostly do encode the data into Node buffers, but we found it would be more efficient to catch these things earlier and have the option of dropping payloads that are too large before we start doing anything with that data.

After implementing some of this though, I actually found an even better way of doing this:

```ts
function maxLimitCheck(maxByteSize: number) {
  let encoder = new TextEncoder()
  // a few bytes of slack so that over-limit strings still write at least maxByteSize bytes
  let maxSizeArray = new Uint8Array(maxByteSize + 4)
  return (input: string): boolean => {
    return encoder.encodeInto(input, maxSizeArray).written < maxByteSize
  }
}

let check = maxLimitCheck(5e6) // 5MB
check("a".repeat(5)) // true
check("a".repeat(5e6)) // true
check("a".repeat(5e6 - 1) + "¢") // true
check("a".repeat(5e6 + 1)) // false
check("a".repeat(2 ** 29 - 24)) // false
```

Testing this out in my benchmark repo with the max size array enforcing a couple of different limits:
I still believe this is a useful function to have; there are more than 10k results for
Seems like a lot of people are using it for
I'm a little worried about offering this API, as it encourages decoupling the length from the data, which can have all kinds of unintended consequences. Bad legacy APIs such as

We'd only support UTF-8, and as such it also seems like it would be quite straightforward (modulo surrogates) to implement this API yourself in script, as you did. Still, we might as well expose it. (I also considered prefixing with
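For illustration, a hand-rolled counter that also handles the "modulo surrogates" caveat could look something like this (a sketch, assuming the goal is to match what TextEncoder produces, where a lone surrogate becomes U+FFFD and therefore counts as 3 bytes):

```js
"use strict"
// Counts the bytes TextEncoder would produce for a string:
// a lone surrogate is encoded as U+FFFD, i.e. 3 bytes.
function utf8ByteLength(input) {
  let total = 0
  for (let i = 0; i < input.length; i++) {
    const code = input.charCodeAt(i)
    if (code < 0x80) total += 1
    else if (code < 0x800) total += 2
    else if (
      code >= 0xd800 && code <= 0xdbff &&
      i + 1 < input.length &&
      input.charCodeAt(i + 1) >= 0xdc00 && input.charCodeAt(i + 1) <= 0xdfff
    ) {
      total += 4 // valid surrogate pair: one astral code point
      i++
    } else {
      total += 3 // rest of the BMP, including lone surrogates
    }
  }
  return total
}

console.log(utf8ByteLength("a¢€𐍈"))   // 10
console.log(utf8ByteLength("\uD800x")) // 4 (U+FFFD is 3 bytes, plus "x")
```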
I suspect that this API might be used to determine the length of the buffer to use for
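If the use case is sizing a buffer before encoding, the pattern in question would presumably look something like the following; the byteLength() method here is hypothetical (no such API exists today), and the snippet only illustrates how the length gets decoupled from the data:

```js
// Hypothetical API: TextEncoder.prototype.byteLength does not exist today.
const encoder = new TextEncoder()
const input = "some payload"

const size = encoder.byteLength(input)                 // hypothetical: measure first...
const buffer = new Uint8Array(size)                    // ...allocate exactly that much...
const { written } = encoder.encodeInto(input, buffer)  // ...then encode into it
// The string is walked twice, and nothing ties `size` to the string that is
// eventually encoded, which is the decoupling concern raised above.
```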
In Chrome DevTools we had a need for this functionality and implemented it here: https://source.chromium.org/chromium/chromium/src/+/main:third_party/devtools-frontend/src/front_end/core/platform/StringUtilities.ts;l=216-243;drc=35e203b9cb7890f5bd6ac3f818f01744728ec398 (patch)
Maybe something that indicates that
What problem are you trying to solve?
new TextEncoder().encode(input).byteLength is an order of magnitude slower than alternatives, including Node's Buffer.byteLength(input) and even handwritten JavaScript implementations.

Benchmarks
My benchmark repo includes a JS implementation that I believe is at least close enough to correct for benchmarking purposes, although I'm no expert in UTF-16 so there may be some mistakes.
What solutions exist today?
new Blob([input]).size
new TextEncoder().encode(input).byteLength
Buffer.byteLength(input) (Node only)

How would you solve it?
Anything else?
No response