Author Topic: Try servers one at a time

Offline tvholic

  • Contributor
  • Posts: 87
Try servers one at a time
« on: February 08, 2009, 08:20:40 pm »
Let's say I have a free server A and a pay server B, and I'd like Alt.Binz to try getting as many articles as possible from A before trying B.  If I set up A as a primary and B as its backup, Alt.Binz constantly connects and disconnects from both servers as it tries each article in sequence.  Not only is this inefficient in terms of overhead, but I start having problems with the servers thinking I have too many active connections, because they aren't handling the disconnects properly.

It works much better to make two passes: set the retry delay to a very high value, make A and B each a primary server with no backup, run one pass with only A enabled, then close and restart Alt.Binz and run a second pass with only B enabled.  It would be great if there were some way to automate this process.
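
To make the overhead concrete, here's roughly what the current article-based behaviour looks like from the outside, as a Python-style sketch (the connect/fetch/disconnect helpers are made up, and this obviously isn't Alt.Binz's actual code):

# Sketch of the current *article-based* fallback: every article is
# tried against each server in turn, so each miss on the primary
# costs a fresh connect/disconnect cycle on the backup.
def download_article_based(queue, servers):
    for article in queue:
        for server in servers:            # primary first, then backup(s)
            conn = server.connect()       # hypothetical helper
            got_it = conn.fetch(article)  # hypothetical helper
            conn.disconnect()             # churn: one cycle per attempt
            if got_it:
                break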

Offline DM8Mydog

  • Contributor
  • Posts: 48
Re: Try servers one at a time
« Reply #1 on: February 08, 2009, 10:26:51 pm »
And on that bombshell, what's the request? ...

I am *guessing* you want an option to keep the 'backup' server connection alive (or increase the keepalive), is that correct?

Offline tvholic

  • Contributor
  • Posts: 87
Re: Try servers one at a time
« Reply #2 on: February 08, 2009, 10:44:47 pm »
Uh, the request was for an option to try the servers one at a time when processing the queue.  Call it a server-based mode (connect to the primary server and try to get all the articles, then connect to the first backup server and try to get all the remaining articles, etc.), as opposed to the current article-based mode.
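
As a rough Python-style sketch (again, connect/fetch are made-up helpers, not actual Alt.Binz code), the server-based mode would be something like:

# Sketch of the requested *server-based* mode: one pass per server,
# with exactly one connect and one disconnect per server for the
# whole queue.
def download_server_based(queue, servers):
    remaining = list(queue)
    for server in servers:                # primary first, then backup(s)
        if not remaining:
            break
        conn = server.connect()           # hypothetical helper
        remaining = [a for a in remaining
                     if not conn.fetch(a)]  # keep only the misses
        conn.disconnect()                 # one teardown per server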

Offline Hecks

  • Contributor
  • Posts: 2011
  • naughty cop
Re: Try servers one at a time
« Reply #3 on: February 08, 2009, 10:53:01 pm »
I see, seems clear enough.  The only disadvantage might be having lots of undecoded parts gradually accumulating, and a big decode spike at the end of the run.

I think DM8Mydog is suggesting the alternative of keeping both the backup & primary servers connected once the first retry is triggered, until the collection is complete?


Offline tvholic

  • Contributor
  • ***
  • Posts: 87
Re: Try servers one at a time
« Reply #4 on: February 08, 2009, 11:27:56 pm »
Ah, yes, keeping the server connections open would be a perfectly fine alternative.  I.e., instead of reusing my four connections to server A, open an additional four connections to server B (and presumably four more to server C ...), and don't disconnect any of them until the end.  But I think that method might be more effort to implement, since the program would have to track and synchronize the extra connections.  Also, if someone had server B configured as a backup for two active primary servers, that would require opening eight concurrent connections to B, which might not be possible.
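
As a sketch (made-up helpers again, and certainly not how Alt.Binz is structured internally), that keepalive alternative would look something like:

# Sketch of the keepalive alternative: open a pool per server on
# first use and keep every pool alive until the queue is finished.
def download_keepalive(queue, servers, conns_per_server=4):
    pools = {}                            # server -> open connection pool
    for article in queue:
        for server in servers:            # primary first, then backup(s)
            if server not in pools:
                pools[server] = server.open_pool(conns_per_server)  # hypothetical
            if pools[server].fetch(article):  # hypothetical helper
                break
    for pool in pools.values():
        pool.close()                      # single teardown at the very end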

The main problem I'm trying to deal with is the constant connections and disconnections, which after a while cause the servers to get confused and start refusing connections entirely ("too many connections from this host").

Offline Spad

  • Contributor
  • Posts: 9
Re: Try servers one at a time
« Reply #5 on: May 13, 2009, 04:09:32 am »
I think I can clarify this as I have the same "issue" as the OP.

I have an ISP newsserver and a pay one. The ISP server only has 5-7 days' retention, so older stuff comes from the pay server. I have the ISP server as primary and the pay server as backup.

Imagine I have a 4 GB download with, say, 50 parts, none of which are present on my ISP server. For every single part of every single file, Alt.Binz will connect to my ISP server and check for the header; when it doesn't find it, it disconnects and moves on to checking the pay server. This results in a *lot* of unneeded connections.

As an alternative, add an option whereby, for each "file group" download, Alt.Binz connects to each available server in sequence, checks for *all* of the headers of all the parts and files in that group, and then caches which ones are present on each server, so that it knows in advance which server to connect to for each part.

One extra connection per server (maximum) up front means potentially thousands fewer connections over the whole download.
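
Purely to illustrate, here's a rough sketch of that probing step using Python's standard nntplib module (the host names in the usage comment are made up, and Alt.Binz would of course do this internally rather than via Python):

import nntplib

def probe_availability(host, message_ids):
    """One up-front connection, one STAT per article: learn which
    message-IDs this server carries without transferring any bodies."""
    available = set()
    with nntplib.NNTP(host) as conn:      # single connection per server
        for msg_id in message_ids:        # IDs in <angle-bracket> form
            try:
                conn.stat(msg_id)         # STAT is a cheap existence check
                available.add(msg_id)
            except nntplib.NNTPTemporaryError:
                pass                      # 430: not on this server
    return available

# Hypothetical usage: probe each configured server once, then route
# every part straight to a server known to carry it.
# on_isp = probe_availability('news.my-isp.example', ids)
# on_pay = probe_availability('news.pay-server.example', ids)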

Offline MrbLOB9000

  • Contributor
  • Posts: 43
Re: Try servers one at a time
« Reply #6 on: May 13, 2009, 08:01:16 am »
Seconded. I couldn't find this thread when I searched, so I posted a similar request the other day: https://www.altbinz.net/forum/index.php?topic=2925.0

Offline Hecks

  • Contributor
  • Posts: 2011
  • naughty cop
Re: Try servers one at a time
« Reply #7 on: May 13, 2009, 09:11:00 am »
Personally, I think that if you're regularly facing a need for thousands of retries on your (pay) backup server, then you need to rethink your server setup.  With Astraweb @ $11 p/m for unlimited, 20 connections with SSL, there's little reason to depend on any crappy ISP server as primary, IMHO.

As for validating headers before starting downloads, there's an open request for that here:

https://www.altbinz.net/forum/index.php?topic=2367.0


Offline MrbLOB9000

  • Contributor
  • Posts: 43
Re: Try servers one at a time
« Reply #8 on: May 13, 2009, 09:46:28 am »
Astra as my primary and a crappy one as my backup is my current setup. Most of the time Astra is great, but sometimes on certain NZBs it has to hit the backup server quite a bit, and that's when I have problems.

Offline Spad

  • Contributor
  • Posts: 9
Re: Try servers one at a time
« Reply #9 on: May 13, 2009, 06:09:33 pm »
Quote from: Hecks on May 13, 2009, 09:11:00 am
Personally, I think that if you're regularly facing a need for thousands of retries on your (pay) backup server, then you need to rethink your server setup.

90% of the stuff I download is within the retention limit for my ISP servers. They're faster than the pay servers because they're on the ISP's local network, they're "free", and they don't have any download caps. But occasionally, on a whim, I'll want to download something that's either older than a couple of weeks or from a group my ISP doesn't carry, and if it's a large download, doubling the number of connections made just seems silly.

This was especially true a few months ago, when my ISP servers were having problems that caused them to hang connections for ~60 seconds if you disconnected and reconnected too quickly. I agree it's hardly the end of the world, but it would be a useful feature if it could easily be coupled with your header-check request, simply from the point of view of efficiency.