Well, the flaw in that argument is that if the article does not exist on the primary server it cannot be downloaded from there anyway. The download should continue at whatever speed your primaries can handle until all the parts that are available have been downloaded, and only then should the backups come into play; the outcome regarding speed would be the same, in my opinion. My main point in this discussion is that the current system works, but not without juggling servers. I would like to see all servers in the list operate independently and stop re-checking for articles that do not exist, by whatever method the programmer thinks best.
For what it's worth, my thoughts on the subject are these. Servers should be grouped into priority bands, numbered or titled 1, 2, 3, 4, 5 (highest, high, mid, low, lowest). The highest group would start downloading; if a valid error code was received from the servers in that group, the next lower group would start on that article, and so on. With 7 servers and 30 connections, all of them could be in use at the same time, and each group would carry on to the end of the queue. Each server would need to know which articles had been tried and which had not, so no parts were missed — something along the lines of the sketch below.
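Just to show what I mean, here is a rough Python sketch of the idea. Every name in it (Server, Article, the band numbers, the 430 check) is invented for the example and not taken from any real client:

```python
from dataclasses import dataclass, field

@dataclass
class Server:
    name: str
    band: int                    # 1 = highest priority ... 5 = lowest

@dataclass
class Article:
    message_id: str
    tried: set = field(default_factory=set)   # names of servers already asked
    done: bool = False                          # successfully downloaded

def next_article(queue, server, servers):
    """Pick the next article this server is allowed to try, or None.
    A server in a lower band only starts on an article once every
    server in the bands above it has already been asked for it."""
    higher = [s.name for s in servers if s.band < server.band]
    for art in queue:
        if art.done or server.name in art.tried:
            continue                            # already downloaded, or this server already tried it
        if all(name in art.tried for name in higher):
            return art                          # every higher band has given up on this part
    return None

def record_result(article, server, nntp_code):
    """Book-keeping after an attempt: remember the server was asked,
    and mark the article done unless it answered 430 (no such article)."""
    article.tried.add(server.name)
    if nntp_code != 430:
        article.done = True
```

The point of keeping the "tried" list on the article rather than on the server is that no part can be missed and no server asks for the same part twice, so all 7 servers and 30 connections can stay busy at once.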
Only when all servers had reached the end of the queue would the retry count come into play. It should default to 1 or 2, not zero, and could still be set by the operator. Once the retry count was exceeded, the articles would be saved as they are now and attempts made to repair them with the PARs. As outlined, no servers would be sitting around waiting for the retry clock to keep pausing them, and downloads should be maximized.
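Carrying on from the sketch above, the end-of-queue handling I have in mind would be roughly this (again only an illustration, the names are made up):

```python
def end_of_queue(queue, retry_count):
    """Called only once every server has reached the end of the queue.
    retry_count should default to 1 or 2 (operator-settable), not zero."""
    missing = [art for art in queue if not art.done]
    if missing and retry_count > 0:
        for art in missing:
            art.tried.clear()       # wipe the history so every server gets another go
        return "retry", missing     # one more full pass, no retry clock pausing servers
    return "repair", missing        # save what we have and leave the rest to the PARs
```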