Yes, the files were decoded within seconds. I also ran tests where I downloaded just one single file (and with only a single connection, too), so the memory cache would not hold parts of another file and would not be flooded with parts, but the result was the same: file corruption.
I also thought it might be caused by too many connections flooding Alt.Binz with data, but even with only one connection, downloading article by article without any error messages, the files ended up corrupt again.
What I meant with "other half were corrupt":
The RAR archive has ~320 parts named like
file_name.partxxx.rar. Parts .part001.rar through .part140.rar were OK, but all files from .part141.rar to .part320.rar had one missing block, except for nine or ten files.
The same files downloaded without the memory cache were fine. I tested it with different Usenet providers; it did not matter. As long as the memory cache was in use (and Alt.Binz was not restarted), the files that were corrupt before ended up corrupt again.
It looks like Alt.Binz stumbles over some bit patterns the files contain (or some sort of "memory lock") and keeps specific parts of a file "locked" in memory. That would explain why the downloaded files end up corrupt over and over again: when assembling the parts into a complete file, it reuses the corrupt parts from the "locked" memory cache for that file. Restarting Alt.Binz clears and re-initializes the memory cache, so when the previously corrupt files are downloaded again, the corrupt memory locks/blocks are gone and the re-downloaded files end up fine.
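To make the hypothesis above concrete, here is a minimal sketch (this is NOT Alt.Binz's actual code; the class and function names are made up for illustration) of how an article cache that never invalidates a bad entry would reproduce the same corruption on every re-download, while a restart, i.e. a fresh cache, fixes it:

```python
# Hypothetical illustration of a "sticky" article cache: once a corrupt
# part is stored, every re-download of the file reuses the bad bytes.
class ArticleCache:
    def __init__(self):
        self._parts = {}  # message-id -> article bytes

    def fetch(self, message_id, download):
        # A corrupt entry stored once is served forever; the cache is
        # never invalidated, so re-downloads never hit the server again.
        if message_id not in self._parts:
            self._parts[message_id] = download(message_id)
        return self._parts[message_id]

def assemble(cache, message_ids, download):
    # "Puzzle" the cached parts together into a complete file.
    return b"".join(cache.fetch(mid, download) for mid in message_ids)

# Simulated server: the first few fetches of "part141" return bad data.
calls = {"count": 0}
def flaky_download(mid):
    calls["count"] += 1
    if mid == "part141" and calls["count"] <= 3:
        return b"CORRUPT"
    return b"GOOD"

cache = ArticleCache()
ids = ["part140", "part141"]
first = assemble(cache, ids, flaky_download)   # stores the corrupt part
second = assemble(cache, ids, flaky_download)  # corrupt again: cached bytes reused

fresh = ArticleCache()                         # "restart": cache cleared
third = assemble(fresh, ids, flaky_download)   # re-fetches, file is intact
```

Under this (assumed) model, the symptoms match: re-downloading through the live cache always yields the same corrupt file, and only a restart (a new cache) lets the articles be fetched fresh.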
It could also be a problem caused by the compiler you are using and not Alt.Binz itself; I don't know. I'm just trying to help identify the cause of the issue so it can be fixed.
Is there a hidden option in Alt.Binz that I can enable so it would log something you could look at, to find out what exactly Alt.Binz is doing when a downloaded file ends up corrupt? The verbose log just says "article downloaded fine" even when the file ends up corrupt, while re-downloading the same articles without the memory cache produces an intact file.