:: 13-Jul-2000 01:00 (Thursday) ::
Greetings, loyal cows!
If you’ve read bovine’s .plan
(http://n0cgi.distributed.net/cgi/planarc.cgi?user=bovine&plan=2000-07-06.03:43),
you know we’re experimenting with a larger minimum block size to see if
it will improve our network performance. Our tests so far are encouraging
(about a 30% reduction in traffic, I believe), and we’re discussing the
pros and cons of applying this change more widely, as well as some alternate
approaches.
In the meantime, I thought I’d point out that using larger blocks not
only reduces network traffic (yours and ours) but also boosts your key
rate, especially on faster machines. The network and buffer management
components of the client take time away from the cores, just like every
other application on your computer. If you fetch/flush after every block,
working in 2^33 blocks generates one thirty-second of the network overhead
of doing the same work in 2^28 blocks. It takes just as long to submit a
completed 2^33 as a 2^28, and it takes just
as long to connect and disconnect from the key servers, no matter how many
blocks are transferred in between. We recommend you fetch/flush only once
or twice a day. Your stats are based on the total work you submit, not
how often you submit it.
For example, a Mac G4 at 450MHz can do a 2^28 block in about 70 seconds.
If that machine were working only on 2^28 blocks, and updated after each
one, it would connect to our network over 1200 times a day. This example
goes beyond mere carelessness and borders on abuse.
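For the curious, the arithmetic behind those figures is easy to check.
Here’s a quick back-of-the-envelope sketch in Python; the 2^28, 2^33, and
70-second numbers are the ones quoted above, everything else is just
division:

    # Rough arithmetic for block sizes and update frequency
    # (the input figures are the ones quoted in the text above).
    keys_small = 2**28                  # keys in a 2^28 block
    keys_large = 2**33                  # keys in a 2^33 block
    print(keys_large // keys_small)     # 32: one 2^33 replaces 32 separate 2^28 blocks

    seconds_per_small_block = 70        # ~70 s per 2^28 block on a 450MHz G4
    blocks_per_day = 86400 / seconds_per_small_block
    print(round(blocks_per_day))        # ~1234 connections/day flushing after every 2^28
    print(round(blocks_per_day / 32))   # ~39 connections/day for the same work in 2^33 blocks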
Larger blocks also mean less data to transfer from keymaster to tally,
and less data for tally to, um, tally each day. Again, you still get the
same credit in stats whether you do 32 separate 2^28 blocks or 1 block of
2^33. The difference is in the bandwidth dcti uses, and the overhead on
your computer. The network bandwidth used by distributed.net (proxies,
keymaster, tally, website and ftp and e-mail lists like the .plan mailing)
is all donated – we’re guests. If we aren’t polite guests, we could be
asked to leave. Even if we don’t upset anyone, we’d like to make the most
of the bandwidth we have.
When we first started RC5-56, we decided on blocks of 2^28 keys because
the fastest personal computer money could buy was a Pentium 133. That
CPU could only do about 114,000 keys per second, so a 2^28 block took it
about 40 minutes to finish. Some older CPUs may have a hard time completing
even a single 2^28 block in a 24-hour period. I’m sure it’s hard to feel
like any work is being done when it takes so long to see anything happen.
We certainly don’t want to alienate these participants by forcing them to
work on blocks so large they don’t finish for a month. Unfortunately,
that could eventually be the case if the people with faster machines don’t
voluntarily tune up their settings. As Moore’s Law
(http://www.tuxedo.org/~esr/jargon/html/entry/Moore's-Law.html) continues
to push technology forward, it’s inevitable that we’ll need to raise the
ceiling beyond blocks of 2^33.
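To make those time scales concrete, here is a small Python sketch of the
same calculation. The 114,000 keys/sec Pentium 133 rate is the figure
above; the 10,000 keys/sec rate is just a hypothetical slower machine for
illustration, not a measured number:

    # Hours to finish one block of 2^n keys at a given key rate.
    def block_hours(block_bits, keys_per_second):
        return (2 ** block_bits) / keys_per_second / 3600.0

    print(block_hours(28, 114000))   # ~0.65 h (~40 min): a 2^28 block on a Pentium 133
    print(block_hours(33, 114000))   # ~21 h: a 2^33 block on that same Pentium 133
    print(block_hours(33, 10000))    # ~240 h (~10 days): a 2^33 block at a hypothetical slower rate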
The other extreme, saving blocks up for weeks or months for a “mega-flush,”
can also be harmful to the network. The sudden high volume of work
submitted all at once can saturate the proxies and the master. A big
enough mega-flush could theoretically take more than one day to be processed
into stats (please, don’t try!).
Though our biggest concern is the central network, these same adjustments
can also improve performance between your clients and personal proxies.
I hope this
information will help you adjust your distributed.net clients in a way
that helps us all be more successful.
As always, thanks for all the cycles!