Your comments

"...second time some files will fail, I wonder why that happens"

That only happens if the source code is full of bugs, to the point that you have to repeat the operation because you (or rather, the developer) don't know where the bug is in the source code.


That would simply mean that TeraCopy 2.3 has source code that was not properly developed. On the other hand, if the software was properly developed, you don't actually need to verify more than once.


Still, there is the matter of the hashing algorithms themselves, because not all of them are "collision free". That expression means a hashing algorithm can let a corrupted file pass as "not corrupt": two different inputs can generate the same hash value, so a changed file could, in principle, still produce a matching hash.


You will understand better with these:

https://en.wikipedia.org/wiki/Collision_(computer_science)

https://en.wikipedia.org/wiki/Collision_resistance

Now comes the really shocking part: most of the algorithms offered in TeraCopy actually have known collision issues and are not safe to use. Re-running the verification with the same hashing algorithm does not make it any better; it is only a waste of time.
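
As a rough illustration (a minimal Python sketch using the standard hashlib module, with hypothetical file paths), this is basically what a copy verification boils down to: hash the source, hash the destination, compare the digests. If the chosen algorithm has known collisions (MD5, SHA-1), a corrupted copy could in principle still produce a matching digest, which is why the choice of algorithm matters more than repeating the check:

    import hashlib

    def file_digest(path, algorithm="blake2b", chunk_size=1024 * 1024):
        # Hash the file in chunks so large files never have to fit in RAM.
        h = hashlib.new(algorithm)
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical paths; the check is only as trustworthy as the algorithm used.
    if file_digest("source/photo.raw") == file_digest("backup/photo.raw"):
        print("digests match")
    else:
        print("MISMATCH - the copy is corrupt")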


"3.x.x didn't verify existing files which is shocking"

That means this version has another kind of bug, if it doesn't verify properly.

This kind of function already exists in every hard drive, my friend. When the list of bad blocks reaches its limit, the drive can't remap bad blocks anymore; at that point it is time to dump the drive in the trash and buy a new one. https://datarecovery.com/rd/what-are-p-lists-and-g-lists/

Actually, that happens with any flash memory: the longer you transfer at maximum speed, the more it overheats, and when it overheats it starts throttling.

That means they are considering/researching it, my friend. But since it is another operating system, many things in the code will probably need to change, which means double or triple the work to maintain it. Also, consider that a macOS version is already being worked on. So I would not wait any longer; find another alternative and you will be fine.

GPUs can be unpredictable in certain operations, causing corruption or bit flips, and the instruction sets can differ greatly from one GPU model to another, so supporting many GPUs, even CUDA-compatible ones, would be hell. Anyway, a bit flip in image processing is not going to make a big difference, which is why, most of the time, data is not sent back from the GPU to the CPU or memory. Also, transferring data from RAM to the GPU has an undesirable delay.

Not really efficient. It is best to verify at the end, for one simple reason: transfers run at higher speed when they are a continuous "stream". If you stop and verify each file as you go, the whole transfer takes more overall time.
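
To illustrate the idea (a minimal Python sketch with hypothetical paths): hash the data on the fly while it streams to the destination, and only re-read the destination once at the very end, so the transfer itself is never interrupted:

    import hashlib

    CHUNK = 1024 * 1024  # 1 MiB reads keep memory usage flat on large files

    def copy_and_hash(src, dst):
        # Copy src to dst in one streamed pass, hashing the bytes as they go by.
        h = hashlib.blake2b()
        with open(src, "rb") as fin, open(dst, "wb") as fout:
            for chunk in iter(lambda: fin.read(CHUNK), b""):
                fout.write(chunk)
                h.update(chunk)
        return h.hexdigest()

    def digest_of(path):
        # Re-read a file once and return its BLAKE2b digest.
        h = hashlib.blake2b()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(CHUNK), b""):
                h.update(chunk)
        return h.hexdigest()

    # One uninterrupted streamed copy, then a single verification pass at the end.
    expected = copy_and_hash("movie.mkv", "backup/movie.mkv")
    print("ok" if digest_of("backup/movie.mkv") == expected else "copy is corrupt")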

I don't think it is time efficient, because you are doing the same thing multiple times; plus, some hashing algorithms are slow as hell. I think the best idea is to use a hash that is both fast and collision resistant (meaning it is safer to use). One algorithm that fits both purposes is BLAKE2.
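
A rough way to see the speed difference for yourself (Python's hashlib, timing a single in-memory buffer, so the numbers are only illustrative and depend on your machine): BLAKE2b is usually at least as fast as the SHA-2 family, while MD5 and SHA-1 are the ones with the known collision problems:

    import hashlib
    import time

    data = b"\x00" * (128 * 1024 * 1024)  # 128 MiB of dummy data, purely illustrative

    for name in ("md5", "sha1", "sha256", "blake2b"):
        start = time.perf_counter()
        hashlib.new(name, data).hexdigest()
        elapsed = time.perf_counter() - start
        print(f"{name:8s} {128 / elapsed:8.1f} MiB/s")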