THE BIG PAGE FILE QUESTION!
I have read so much about how big your page file should be that I think my head is going to explode. Does anyone actually know what they are talking about when it comes to this subject? Please let your thoughts be known, as I trust the users on this forum more than on any other, and I know there are loads of people who would benefit from this discussion. To get the ball rolling, how about we discuss whether or not the file should be static as opposed to dynamic? My system has 512MB RAM, and I have set my pagefile to a static 384MB on the recommendation of a website. This seems to work just fine. Any views?
Let's hope this will be the last discussion of this confusing issue!
T42 Dothan 725 1.6ghz, 1gb ram, 40gb hd, 7500 32 mb ATI Mobility.
Re: THE BIG PAGE FILE QUESTION!
SimonCC wrote: this seems to work just fine,

So what's your problem then?
Okay... The standard optimal option for those who don't disable the page file like me is to set it at a CUSTOM size, not System Managed. System-managed page files are constantly resized by Windows, which tends to cause page file fragmentation and leads to worse performance because of that. The minimum should be 1.5 times your physical memory (so 768MB) and the maximum should be 2.5 times your physical memory (so 1280MB). This is the most common setting when optimizing your page file.
The reason I disable my page file is so that Windows writes everything to the RAM, which is faster than your hard drive. This isn't completely safe with 512MB of RAM, as you may find yourself running out of memory with large applications, and most big games.
Setting your page file smaller isn't useful, because that does not make Windows use more of your RAM (it's just not smart enough). If you stick to the general 1.5-2.5x physical RAM rule, you will be safe with every machine.
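The 1.5x-2.5x arithmetic above is simple enough to sketch in a few lines. This is just the thread's rule of thumb, not an official Microsoft formula, and the helper name is made up:

```python
# Hypothetical helper for the 1.5x / 2.5x rule of thumb discussed
# above. The multipliers are this forum's suggestion, nothing official.
def pagefile_bounds(ram_mb, lo=1.5, hi=2.5):
    """Return (minimum, maximum) pagefile size in MB for a given RAM size."""
    return int(ram_mb * lo), int(ram_mb * hi)

print(pagefile_bounds(512))   # 512 MB RAM -> (768, 1280)
print(pagefile_bounds(1024))  # 1 GB RAM  -> (1536, 2560)
```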
T61p 6460-67U.
Today seems to be "the day of the pagefile". 
Read the links in my post here. They mostly go back to a guy named DriverGuru -- he writes and debugs Windows drivers for a living and really knows his stuff when it comes to the guts of an NT-based OS. Search around ArsTechnica for anything posted by him - chances are it will be incredibly informative.
none wrote: Okay... The standard optimal option for those who don't disable the page file like me is to set it at a CUSTOM size, not System Managed. System-managed page files are constantly resized by Windows, which tends to cause page file fragmentation, and leads to worse performance because of that.

This is bullcrap. Page file fragmentation is NOT the horror of horrors that Executive Software (makers of Diskeeper) want you to think it is.
none wrote: The reason I disable my page file is so that Windows writes everything to the RAM
See, you don't understand pagefiles. When a program wants memory, it makes a call to allocate memory. If it is an active program, and there is RAM available, it will use RAM. When the program becomes inactive, it will get paged to disk to make room for other apps.
Now, "paged to disk" does not mean "Windows copies the entire contents of RAM to the swapfile." (This is the most common misconception about paging... and it's totally incorrect.) When Windows pages out an app, various pieces of the app are simply erased from RAM entirely. This would be the primary code and any DLLs that were referenced only from that program - the VM system dumps them. When/if the program gets paged back in, the OS will simply re-read the exe and DLLs from their original location on the HDD, i.e. where you installed them to (c:\program files\wherever). The only stuff that gets copied from RAM to the pagefile is any data the program happened to be working with. With Word, for instance, this would be any .doc files you have open.
Say you switch back to Word after it has been paged out. What happens? The OS reloads the code from the DLLs and exe as it needs it, and it reads the "variable data" from the pagefile.
Think for a minute about defragmentation. Generally, the purpose is to minimize seeking on a disk - the more the actuator has to move, the longer it takes to read the drive. However, in the case of paging-in an app, the actuator is moving no matter what -- you have to page in the code from various locations on the disk as well as paging in the data from the pagefile. In many instances, the data contained in the pagefile is actually SMALLER than the code you are loading. The only way to speed this procedure up would be if you could defragment the drive in such a way that a program plus all of its required dlls were "in order".
none wrote: Setting your page file smaller isn't useful, because that does not make Windows use more of your RAM (it's just not smart enough).

Setting your page file smaller isn't useful because any time it is THERE, Windows will use it (as it rightly should, and you should want it to). If you don't have one, you run the risk of getting "out of memory" errors when an app tries to allocate more RAM. It will ask for a chunk, and rather than paging out unused DLLs or data, the machine will simply refuse to allocate the memory and give you an error instead. Note that setting the file to a small size does NOT make Windows "less likely" to page -- it just means that when it does decide to page, it will get [censored] off and run out of space sooner.
It's in your best interest to make the minimum "as large as you think you'll ever need" - this keeps it all in a single extent ("fragment") on the disk. Again, note that having a large minimum does NOT make it "swap hungry." It won't swap more just because the space is there. Giving it a larger maximum size allows the OS to grow the pagefile on the fly if it needs to, to fulfill a request for some huge amount of memory. Note that when the OS resizes the pagefile, it does NOT delete the one that is there and write a new one (in multiple fragments). It simply adds another extent in whatever free space it needs, temporarily. When you reboot the machine, any extents that have been created above the minimum PF size will be erased, and if your original "minimum size" pagefile was in one extent ("fully defragmented") then it will continue to be that way after you reboot the machine.
Your understanding of the pagefile and Windows' VM system is very flawed, but your recommendation is about the same one I give. Make the minimum something sane - if possible, as large as you think you'll ever need for program usage, but larger can't hurt. 1.5x RAM is a good place to start, but by no means a hard-and-fast rule. Your smartest bet is to make the maximum something MASSIVE, even if it is like 90% of your free space. Windows will always clear it out again after you reboot, and by having that much "potential swap" you ensure that you will not get out-of-memory errors again. I don't foresee myself ever having that much memory in use (I don't use Photoshop, though...). I've got 1GB of RAM, and my pagefile settings are 1536MB/2048MB.
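The paging behaviour described above - individual least-recently-used pages get evicted, code pages are simply discarded while only dirty data pages are written to the pagefile - can be sketched as a toy simulator. Everything here (the names, the 4-page "RAM") is invented purely for illustration:

```python
# Toy simulator of the behaviour described above: when physical memory
# fills up, the VM system evicts individual least-recently-used pages,
# not whole programs. Code pages are simply dropped (they can be re-read
# from the .exe/.dll on disk); only data pages go to the pagefile.
from collections import OrderedDict

RAM_PAGES = 4         # tiny "RAM" of 4 pages

ram = OrderedDict()   # page id -> kind, ordered least-recently-used first
pagefile = []         # data pages written out to the swapfile

def touch(page, kind):
    """Access a page, paging out the LRU page if RAM is full."""
    if page in ram:
        ram.move_to_end(page)                    # mark most recently used
        return
    if len(ram) >= RAM_PAGES:
        victim, vkind = ram.popitem(last=False)  # evict the LRU page
        if vkind == "data":
            pagefile.append(victim)              # dirty data -> pagefile
        # code pages are just dropped; reloaded from disk on demand
    ram[page] = kind

# Word's code and an open document, then a game pushes them out:
touch("word.exe:0", "code")
touch("word.exe:1", "code")
touch("doc1:0", "data")
touch("game.exe:0", "code")
touch("game.exe:1", "code")   # evicts word.exe:0 (discarded, not swapped)
touch("game.exe:2", "code")   # evicts word.exe:1 (discarded)
touch("game.exe:3", "code")   # evicts doc1:0 -> written to the pagefile

print(pagefile)  # ['doc1:0'] -- only the data page hit the pagefile
```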
Last edited by ZPrime on Tue Nov 16, 2004 5:02 pm, edited 4 times in total.
New Biz: 4062-27U - W500 C2D T9600, 15.4" 1920x1200 (FireGL V5700), 160G 7200rpm, 4G PC3-8500, DVDRW, Intel 5100, BT, TurboMem, T60p KBD 
Old Biz: 2613-CTO - T60p Core 2 T7200, 14" 1400x1050 (FireGL V5250), 100G 7200rpm, 3G PC2 5300, DVDRW, Intel a/g , BT
Thanks none. I have survived for several years using a swap file that was set for 256MB min and the amount of memory as max (768MB right now). I was always able to run multiple virtual machines (VMware chews up memory at a vast rate). With SP2 on my T41, the swap file won't set to anything less than 1150MB (system-managed size = 1.5 x 768). I tried setting it to zero, but that caused some limitations in virtual machine capacity.
I like your explanation, and I am using a paging file, but I lost 1GB of space on my hard drive in the process for nothing. With Microsoft, I don't think it is that the swap file can't work more effectively; I think rather that they couldn't care less - arrogant fatheads that they are.
.... JDHurst
I just wanted to cut through the crap. You see, I have read that you should set the same initial and maximum file size to prevent thrashing and fragmentation of the page file, which is why I have it set at 384MB. I have had no problems, and my total commit charge does not even get close to my total physical memory, even when the machine is on all day. I have no real emergency here, but I just want to know why there is such a discrepancy between these theories. For now I'll stick with my static setting. As for "well, you might get low memory warnings" -- so what!! Then increase your page file. It is worth perfecting this through trial and error, and if you are lucky enough to have massive RAM, all the more reason to look into doing away with the messy page file system.
T42 Dothan 725 1.6ghz, 1gb ram, 40gb hd, 7500 32 mb ATI Mobility.
I would also like to add that there is a line that can be added to system.ini so that the page file is used in "conservative" mode, meaning virtual memory is used ONLY when your RAM is actually full. Only problem is, I can't find the website where I saw this line! Anyone know it?
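For reference, the system.ini line being asked about is most likely ConservativeSwapfileUsage. Note that this is a Windows 95/98/ME tweak; NT-based Windows (2000/XP) does not read system.ini for memory management, so it would have no effect on the ThinkPads discussed here:

```ini
; Windows 95/98/ME only -- ignored by NT-based Windows (2000/XP).
; Tells the 9x VM manager to use the swapfile only when physical
; RAM is exhausted.
[386Enh]
ConservativeSwapfileUsage=1
```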
T42 Dothan 725 1.6ghz, 1gb ram, 40gb hd, 7500 32 mb ATI Mobility.
Yes - it is true. So many opinions from folks that don't understand paging, swapping, and virtual memory.
One of my titles is "kernel engineer". ZPrime essentially has it right. I don't know where the other information comes from.
Memory is managed in small chunks, known as the page size. "paging" is when these small chunks are written to disk to free up memory. A "page" is the smallest piece of memory that may be allocated and "paging" is done in multiples of these - not entire applications.
When an application allocates memory (either implicitly or explicitly), it is either read-only or writeable (this is an over-generalization, but it suffices). Read-only memory might be the executable code of an application. Writeable memory is for data that the program can change. Even when you select "no pagefile", paging-in (reading from disk) still occurs for the executable portions of applications and DLLs.
A swapfile is "backing store". It is permanent storage the system may use to save data from real memory. Data is only "forced" to be written to disk when there is not enough real memory to satisfy all the running applications (this isn't entirely true - when the system is idle preventative writes may be done to improve performance overall). Only enough pages are written to disk to free up enough real memory needed by another application.
Running a system without a swapfile (pagefile) does not improve performance. It will, however, result in out-of-memory errors faster.
There are differing descriptions of "optimal performance". Given suitable disk space, you should set a custom pagefile size with equal minimum and maximum values. The actual size varies based on your usage, but twice your real memory (i.e. a 1GB pagefile for 512MB of memory) is one rule of thumb. Using a very large pagefile does not hurt performance, but it does waste disk space.
If you are really concerned about performance, then to increase the minimum pagefile size, or to change from system-managed to custom, you should first select no pagefile, defrag, then set the custom size appropriately. This will help ensure a contiguous allocation, which could improve performance slightly (though probably not noticeably unless very large chunks of memory are allocated at once). Fragmented pagefiles are not the evil they are purported to be, but if it makes you happy, this is how you do it.
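The page granularity described above is easy to observe for yourself. A minimal sketch: Python's mmap module exposes the system page size, and `pages_needed` is a made-up helper showing that any allocation is rounded up to whole pages:

```python
# The OS manages memory in fixed-size pages; Python exposes the size
# via the mmap module (commonly 4096 bytes on x86 systems).
import mmap

page = mmap.PAGESIZE
print(page)

def pages_needed(nbytes):
    """Number of whole pages the OS must reserve for nbytes of memory."""
    return -(-nbytes // page)  # ceiling division

print(pages_needed(1))       # 1 -- even a single byte occupies a full page
print(pages_needed(10_000))  # 3 on a system with 4 KB pages
```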