Thanks for your input.
To complement your findings: Chevereto v4.3 introduced client-side xxh64 file hashing for uploads larger than CHEVERETO_MAX_CHUNK_UPLOAD_SIZE. It does this to give the server an early hint (before all the chunks are uploaded) so it can detect duplicate uploads.
This system is really fast because the hashing runs in WebAssembly and it uses the File stream API for progressive hashing. It never reads the entire file into memory; it processes it as a stream, which is about as fast as it gets and very memory efficient.
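To illustrate the idea (this is a minimal sketch, not Chevereto's actual implementation), here is how progressive xxh64 hashing over a file stream can look in the browser, assuming the hash-wasm package provides the WebAssembly hasher:

```ts
import { createXXHash64 } from 'hash-wasm';

// Hash a File progressively: read it as a stream and feed each chunk
// to a WebAssembly xxh64 hasher, so the whole file is never held in memory.
async function hashFileXXH64(file: File): Promise<string> {
  const hasher = await createXXHash64(); // instantiates the WASM module
  hasher.init();

  const reader = file.stream().getReader();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    hasher.update(value); // value is a Uint8Array chunk of the file
  }

  return hasher.digest(); // hex digest of the whole file
}
```

The resulting digest can then be sent ahead of the chunked upload so the server can answer early when it already has the file.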
The problem is that the underlying filesystem, and the browser itself, dramatically affect the reliability of this feature when handling several files at once. What I've found in the wild is that although browsers support the same APIs, the way they handle the load differs significantly from case to case.
For now, what you can do is bump the chunk size limit (CHEVERETO_MAX_CHUNK_UPLOAD_SIZE) so the chunked upload process isn't triggered at all and the uploader behaves like the previous version.