Can NTFS compression speed up slow notebook HDDs?
-
Puppy
- Senior ThinkPadder

- Posts: 2261
- Joined: Sat Oct 30, 2004 4:52 am
- Location: Prague, Czech Republic
Do you have any experience with using NTFS compression to speed up the loading of large files, typically the virtual disk files of virtual machines such as Microsoft Virtual PC or VMware? My configuration is a 4200 rpm HDD and a 1.6 GHz Dothan.
-
Bob Collins
- Junior Member

- Posts: 279
- Joined: Sun Apr 25, 2004 2:16 pm
- Location: Palm Beach Gardens, FL
As I understand your question, you are asking about speeding things up through compression, in particular NTFS compression.
Simply, no. Compression will help the drive be a bit faster after a good defrag, because there is less data to seek through to find files. The catch is that when you go to use a compressed file, it must be decompressed on the fly, which is of course slower than reading the same file as if it had not been compressed.
The thing with NTFS compression is that it is transparent to the user. It does free up HD space, but the cost is exactly what you are after: speed.
Your best bet, if you do not already do it, is to defrag the drive every week. Fragmented files hurt system performance tremendously. The built-in defragmenter is a stripped-down version of the one from Executive Software (assuming you are running some form of Windows NT, 2000, or XP) and as such does not allow scheduled defragging. The full version includes a scheduler; I have that at the office, and my systems defrag nightly. They always run smoothly.
Your best bet for disk speed is a faster-spinning drive, like the Hitachi 7,200 rpm jobs. I noticed a nice increase in my T22 merely by going from the original 4,200 rpm drive to a 5,400 rpm one.
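To put rough numbers on that trade-off, here is a back-of-envelope sketch. All figures (disk throughput, compression ratio, decompression speed) are illustrative assumptions for an old 4200 rpm notebook drive and a Dothan-class CPU, not measurements:

```python
# Back-of-envelope model: reading a compressed file wins only when the
# time saved on disk transfer exceeds the CPU time spent decompressing.

def read_time_s(size_mb, disk_mb_s, ratio=1.0, decompress_mb_s=None):
    """Seconds to read a file whose uncompressed size is `size_mb` MB.

    ratio: compressed size / uncompressed size (1.0 = stored uncompressed).
    decompress_mb_s: CPU decompression throughput; None = no decompression.
    """
    t = (size_mb * ratio) / disk_mb_s        # time to pull the bytes off disk
    if decompress_mb_s is not None:
        t += size_mb / decompress_mb_s       # time to expand them in memory
    return t

# Assumed numbers: 1 GB virtual disk file, 20 MB/s sequential reads,
# 40% space savings, 150 MB/s decompression on the CPU.
plain = read_time_s(1000, disk_mb_s=20)
packed = read_time_s(1000, disk_mb_s=20, ratio=0.6, decompress_mb_s=150)
print(f"uncompressed: {plain:.1f}s  compressed: {packed:.1f}s")
```

Under these made-up numbers the compressed read actually comes out ahead, which is why the answer depends so much on how compressible the file is and how fast the CPU can decompress relative to the disk.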
Bob
701C, 600X, T22, G4 Powerbook
-
JonathanGennick
- Junior Member

- Posts: 302
- Joined: Fri Feb 25, 2005 1:03 pm
- Location: Munising, MI, USA
- Contact:
The idea of using compression to boost performance of an I/O bound system does have merit. I know at least one database administrator who picked up a performance boost on some (not all) I/O bound queries by using Oracle's table compression. In his case, the gain from having to read fewer blocks from disk far outweighed the overhead of decompression.
But database management systems do a lot more I/O than we do on our ThinkPads. I tend to doubt that compression would improve performance for reading and writing small files such as applications. For large files? The only real way to find out is to try it both ways and see whether you can measure improved performance on the tasks that most concern you.
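The "try it both ways" test could be as simple as the timing harness below. The file paths are hypothetical; on Windows you would first mark one copy of the file as compressed (via the folder's Properties > Advanced dialog, or the `compact` command) and flush or defeat the file cache between runs so it doesn't mask the disk speed:

```python
import time

def time_read(path, chunk_size=1 << 20):
    """Sequentially read a file in 1 MB chunks and return elapsed seconds."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(chunk_size):
            pass
    return time.perf_counter() - start

# Hypothetical usage: the same VM disk image stored twice, one copy
# NTFS-compressed, one not. Reboot between runs to empty the cache.
# t_plain = time_read(r"D:\plain\disk.vhd")
# t_packed = time_read(r"D:\compressed\disk.vhd")
# print(f"plain: {t_plain:.1f}s  compressed: {t_packed:.1f}s")
```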
-
Bob Collins
- Junior Member

- Posts: 279
- Joined: Sun Apr 25, 2004 2:16 pm
- Location: Palm Beach Gardens, FL
Good point on the DB tools; however, I would think that table compression is quite different from NTFS compression. As I understand it, NTFS compression is similar to zipping, and as such requires decompression on read, and then recompression on write if changes are made to the file.
As noted, a ThinkPad, or really any "normal"-use system, will not see any benefit from this compression.
I would be interested if you tried it and ran some quantifiable tests.
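Conceptually that round trip looks like the sketch below. zlib here only stands in for NTFS's own LZNT1 algorithm (a different LZ77-family scheme that works per compression unit, typically 64 KB), so the sizes are illustrative:

```python
import zlib

# Transparent compression, in miniature: data is compressed on write and
# expanded again on read, without the application seeing either step.
original = b"virtual disk image data " * 4096   # repetitive, so it packs well
stored = zlib.compress(original)                # what a "write" keeps on disk
restored = zlib.decompress(stored)              # what a "read" hands back

assert restored == original
print(f"{len(original)} bytes stored in {len(stored)} on disk")
```

The space savings come for free only if the CPU can run that decompress step faster than the disk could have delivered the extra bytes.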
Bob
701C, 600X, T22, G4 Powerbook
-
JonathanGennick
- Junior Member

- Posts: 302
- Joined: Fri Feb 25, 2005 1:03 pm
- Location: Munising, MI, USA
- Contact:
Bob, I forget the compression algorithm that Oracle uses. I believe that rows are compressed when written to a block and uncompressed when read. (Note that being written to a block doesn't necessarily translate into disk I/O, because blocks get buffered.) I should look it up sometime and refresh my memory.
But in the end, I agree with you. I'm skeptical that compression would translate into a significant performance improvement for the typical sort of I/O one does on a laptop.
If I had the time, though, I'd try it on some large files. It'd be an interesting experiment to run. But at the moment I just don't have the time.
As my colleagues and I often say when discussing things Oracle: one good test is worth a thousand conjectures.