distributed.net staff keep (relatively) up-to-date logs of their activities in .plan files. These were traditionally available via finger, but we've put them on the web for easier consumption.
:: 28-Feb-2001 20:48 (Wednesday) ::
Nifty: http://www.ratajik.com/COWPump/
:: 28-Feb-2001 20:41 (Wednesday) ::
After Decibel’s plan from yesterday, there are some things that need
explanation. We’ve had a lot of complaints from DPC members that argued
that there was no organized megaflush going on and that the accusation
made by the DCTI staff yesterday was premature and partly false.
What seems to have caused the backlog was individual team members who had
saved up blocks and flushed them all because they feared their v2.8012 blocks
would soon become invalid. A valid reason to flush, one might say.
Unfortunately, this meant a lot of blocks were flushed at the same time,
blocks that would normally have been spread out over days.
Some remarks on this practice: it’s _not_ good to save up blocks, even
when you do it only on your own or ‘just for a few weeks’. Why?
Our master is optimized for processing keys from one, or at most a couple,
of subspaces. Every subspace takes up 32MB of paged-in memory, so the fewer
subspaces paged in, the faster the master runs. Due to the optimizations in
the code, even switching between in-core subspaces incurs a huge performance
hit. As soon as the master process has a backlog of a couple of million
blocks from 20 or so different subspaces, it chokes.
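To make the effect concrete, here is a toy model of the situation above. It is my own sketch, not the real master code: it treats the master's memory as an LRU set of paged-in subspaces (the in-core limit of 4 is an assumed number) and counts how many 32MB page-ins a stream of blocks costs when the blocks come from a couple of subspaces versus twenty.

```python
import random

# Hypothetical sketch (not the actual master code): model the master's
# memory as an LRU cache of paged-in subspaces and count page-in faults.

IN_CORE_LIMIT = 4          # assumed: subspaces that fit in core at once

def page_ins(block_stream, limit=IN_CORE_LIMIT):
    """Count how many 32MB subspace page-ins a stream of blocks costs."""
    in_core = []           # LRU order: most recently used last
    faults = 0
    for subspace in block_stream:
        if subspace in in_core:
            in_core.remove(subspace)      # refresh LRU position, no fault
        else:
            faults += 1                   # a 32MB page-in
            if len(in_core) >= limit:
                in_core.pop(0)            # evict least recently used
        in_core.append(subspace)
    return faults

# Normal day: 10,000 blocks drawn from 2 open subspaces.
normal = [random.randrange(2) for _ in range(10_000)]
# Megaflush: the same 10,000 blocks scattered over 20 subspaces.
flush = [random.randrange(20) for _ in range(10_000)]

print(page_ins(normal))    # 2: each subspace paged in once, then reused
print(page_ins(flush))     # thousands: the cache thrashes constantly
```

With only two subspaces, each is paged in once and every later block is a cache hit; with twenty, almost every block evicts something another block will need again, which is the choking behavior described above.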
Some people justify their flushing by saying: ‘But hey, if you can’t cope
with the load of a couple of million extra blocks per day, how are you going
to handle it in the future, when these loads are normal?’ I hope the
explanation above shows once and for all why this reasoning is wrong. We
have a perfectly capable master box whose specs are still more than
adequate; load testing shows that when blocks come mostly from the same
subspaces, we can easily handle 1000Gkeys/s on the current hardware.
Normally, blocks from unopened subspaces are kept in a separate queue which
is processed at quiet times. When a lot of blocks suddenly arrive from many
unopened subspaces, that theoretical 1000Gkeys/s ceiling drops to something
below our current rate, hence the backlog, which is very hard to get rid of.
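The two-queue behavior just described can be sketched as follows. This is an assumed design for illustration, not the actual proxy or master code; the set of open subspaces is made up for the example.

```python
from collections import deque

# Hedged sketch (assumed design): blocks from subspaces that are already
# open are tallied on the fast path; blocks from unopened subspaces are
# parked in a deferred queue to be drained during quiet periods.

OPEN_SUBSPACES = {0, 1}        # assumed: the couple of subspaces now open

def route(blocks):
    """Split incoming (subspace, block) pairs into tallied vs deferred."""
    tallied, deferred = [], deque()
    for subspace, block in blocks:
        if subspace in OPEN_SUBSPACES:
            tallied.append(block)      # fast path: subspace is in core
        else:
            deferred.append(block)     # wait for a quiet period
    return tallied, deferred

incoming = [(0, "a"), (5, "b"), (1, "c"), (9, "d")]
tallied, deferred = route(incoming)
print(tallied)          # ['a', 'c']
print(list(deferred))   # ['b', 'd']
```

A megaflush defeats this scheme because the deferred queue fills faster than quiet periods can drain it, which is exactly how the ceiling drops below the incoming rate.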
With this explanation I hope to have convinced people not to save up too
many blocks. Daily stats are for exactly that: daily rankings, not for
grabbing the #1 spot for one day because you’ve saved up more, or longer,
than your friends. If you really want to be #1, just recruit more computers!
We try to accommodate as many participants as possible and we are very
pleased with all the enthusiasm distributed.net participants show. But if
people keep acting against the policies set out by distributed.net staff,
we may have to take measures, such as blocking people from stats or
changing the lifetime of a block. This is not because we suddenly dislike
those participants, but because we want the contests to be satisfying for
_all_ users: free of backlogs, so everyone can have their blocks tallied
on time.
One more thing on backlogs: a backlog doesn’t mean our system is broken; it
means our system is handling the load reasonably well. We accept all
blocks, and never give a connection refused on our proxies. We _will_
process all those blocks in the end. Maybe not today, but eventually. So
every block you flush will be counted, and your stats total will reflect
your total work done. And if we all take care to flush to the system the
way it was designed, backlogs will be kept to a minimum and daily stats
will be correct, too.
Keep on crunching!
:: 27-Feb-2001 23:31 (Tuesday) ::
The master is backlogged right now, and has been for quite some time
(http://n0cgi.distributed.net/rc5-proxyinfo.html). This is due to a
‘MegaFlush’ that the Dutch Power Cows are doing, despite our requests that
they discontinue this practice.
The effect of this backlog is that daily stats will be off for the next
several days. Everyone’s stats will appear low for today, and will slowly
return to normal over the next few days.
Outbound work will still be available. Once the master has processed the
backlog, RC5 stats will return to normal. OGR stats should be unaffected.
Expect the occurrence of dupes to increase while this backlog exists as
well.
:: 25-Feb-2001 21:09 (Sunday) ::
We had to back-rev the latest set of clients: v2.8012.466 contains a bug
that will produce bad results. Please downgrade to clients in the v2.8010.*
family. All v2.8012 clients have been removed from the download page
(http://www.distributed.net/download/clients.html). At this time we are
not rejecting results from v2.8012, but in the near future we will be.
We will pre-release fixed clients ASAP, but once again, for the time being,
please downgrade to v2.8010 or earlier.
Sorry for any inconvenience this has caused.
Paul
:: 11-Feb-2001 16:21 (Sunday) ::
Before my in-box gets flooded… :)
The master was down for about 4 hours at the end of Feb 10, so everyone’s
stats for Feb 10 will be a little low. The backlogged work will be included
in the run for Feb 11.
:: 07-Feb-2001 16:59 (Wednesday) ::
If you’ve got a few minutes to kill today, take a look at an interesting
exercise in steganography: http://www.spammimic.com/ can hide a short
message by making it look like spam.
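The basic trick is easy to sketch. This toy version is my own illustration, not spammimic’s actual grammar-based encoder: each bit of the secret simply picks one of two spam-sounding phrases, so the output reads like junk mail but decodes exactly.

```python
# Toy steganography sketch (not spammimic's real algorithm): encode each
# bit of the secret as a choice between two spam-like phrases.

ZERO = "Act now"
ONE = "Limited time offer"

def hide(secret: str) -> str:
    """Turn each character into 8 bits, each bit into a spam phrase."""
    bits = "".join(f"{ord(c):08b}" for c in secret)
    return "! ".join(ONE if b == "1" else ZERO for b in bits) + "!"

def reveal(spam: str) -> str:
    """Map each phrase back to a bit and reassemble the characters."""
    phrases = spam.rstrip("!").split("! ")
    bits = "".join("1" if p == ONE else "0" for p in phrases)
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

print(reveal(hide("hi")))   # hi
```

A real encoder like spammimic’s uses a richer grammar so the cover text varies, but the round trip principle is the same.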
:: 02-Feb-2001 16:26 (Friday) ::
The RC5 stats run aborted last night, and at this point it’s really too
late to start it for today, so I’m just going to leave it until tonight.
That means there will be two RC5 runs tonight instead of the usual one.
Sorry for the inconvenience.