Using find to make things fast

Posted: Tue Apr 08, 2008 1:33 am
by lightweight
As 2x2GB RAM prices have fallen, I've been playing with the idea of mounting various directories on a ramdisk and writing them back to disk at shutdown, to cut or eliminate disk access after the initial boot. I've also looked at loading Linux itself into a ramdisk, because guys like this have the right idea: http://thumper.fastcoder.net/wiki/Runni ... _a_Ramdisk . Then it was pointed out to me that I was missing the obvious: instead, one could just pull, say, /lib into the page cache with a

Code: Select all

find /lib -type f -exec cat {} + > /dev/null
(or run it over /, excluding .mp3s and the like — a bare "find /lib" only caches the metadata; you have to read the files to pull their contents into the cache)
and experiment to their heart's delight with speed and battery life, which is far more convenient and practical. One can already control writes back to disk, and shutdown does a sanity sync anyway. Cool.
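For the whole-filesystem variant, a rough sketch of what the exclusion might look like — the starting path and the skipped patterns here are just examples, and it's the read to /dev/null that actually pulls file contents into the page cache:

```shell
# Warm the page cache with everything under /usr, skipping large
# media files (paths and patterns are examples only).
find /usr -type f ! -name '*.mp3' ! -name '*.ogg' \
    -exec cat {} + > /dev/null
```

The `-exec cat {} +` form batches many files into each cat invocation, which keeps the command cheap even over a big tree.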

Posted: Tue Apr 08, 2008 5:06 am
by aaa
But it's very easy to nuke the disk cache, just by writing or reading a huge file...
Plus, a whole directory like /lib isn't all used; only small parts of it are. That makes both methods cumbersome. The ramdisk approach is still the fastest, though, because sequentially loading one big file is much faster than loading each individual file one by one with random seeks.
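For reference, the "one big file" ramdisk route could look roughly like this — a sketch only: the mount points, the size, and the /var/lib.img image file are all invented for illustration, the mounts need root, and any changes would have to be written back to the real disk at shutdown:

```shell
# Sketch: keep /lib as a single filesystem image, copy it into tmpfs
# in one sequential read, and loop-mount it over /lib. (Needs root;
# /var/lib.img and the size are hypothetical.)
mkdir -p /mnt/ram
mount -t tmpfs -o size=600M tmpfs /mnt/ram   # RAM-backed filesystem
cp /var/lib.img /mnt/ram/lib.img             # one big sequential read
mount -o loop /mnt/ram/lib.img /lib          # serve /lib from RAM
```

The win is that the disk only has to do one sequential streaming read at boot, instead of thousands of small seeks.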

BTW, there's a tool called 'preload' that's supposed to help with this. However, I haven't seen much benefit... maybe I just configured it wrong or something (I just installed it and expected it to work, lol).
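One rough way to check whether any of this (preload included) is actually doing something is to time a cold read against a warm read of the same file — dropping the cache requires root, and the library path below is just an example:

```shell
# Compare cold vs. warm reads of the same file (path is an example).
sync
echo 3 > /proc/sys/vm/drop_caches     # root only: evict cached pages
time cat /lib/libc.so.6 > /dev/null   # cold read: served from disk
time cat /lib/libc.so.6 > /dev/null   # warm read: served from RAM
```

If the second read isn't dramatically faster, the file never made it into (or fell out of) the page cache.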