Well, this is more on the theoretical side of it, but I was, at the time, thinking of a Beowulf Cluster and ran across a fairly simple description of Amdahl's Law (Amdahl, as it happens, was working at IBM when he formulated it):
http://www.phy.duke.edu/~rgb/Beowulf/be ... ode21.html
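The law the link describes boils down to one formula, and a minimal sketch makes the dual-core ceiling obvious (the function name and numbers here are just illustrative, not from the linked page):

```python
def amdahl_speedup(p, n):
    """Theoretical speedup for a workload whose fraction p (0..1) is
    parallelizable, run on n cores (Amdahl's Law). The (1 - p) serial
    fraction never gets faster no matter how many cores you add."""
    return 1.0 / ((1.0 - p) + p / n)

# Even a 95%-parallel workload on 2 cores tops out well under 2x:
print(amdahl_speedup(0.95, 2))  # ~1.90x
# And as n grows, the serial 5% caps you at 1/0.05 = 20x, ever.
print(amdahl_speedup(0.95, 1000))
```

That hard cap from the serial fraction is the whole point of what follows.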
To me, this only makes sense if, for every task the user notices as work, a Pentium M has a certain threshold for how much it can do in a given time without the user perceiving a delay, much like the idea behind Hyper-Threading. If a user has a low enough threshold for things like time-to-completion, and the OS handles everything serially, then dual core would appear to have an advantage over single core. So it ends up depending on the user. This also assumes the user puts a very high value on program speed; long-running progressive tasks would then be the worst-case scenario.
Yet that means that if a process is forced through a serial procedure call, the fastest you can ever get it is the time through that serial portion. So on a 1.86GHz Yonah, you may get better performance, defined as high values of R and T, on 95% of all tasks and still be the same as, or quite possibly slower than, say, a 2.0GHz Dothan on the remaining 5%. If you consider web browsing/email/word processing/Excel/PowerPoint as high-level tasks, then you would only get a tad more performance out of the current Yonahs. Unless you are doing very large amounts of work (pushing the compute limits of Excel), you may only realize a 25% gain in speed. Word processing will not benefit much, if at all, since it is limited by the user, who can be assumed to be near-serial (try typing out several essays at the same time); further, the only advantage email and web browsing gain is in pure compute speed, after which they are limited by the connection you are on, and bandwidth becomes more important.
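The Yonah-vs-Dothan trade-off above can be worked through with the same formula. This is a deliberately crude sketch: it scales clock speed by the Amdahl speedup and ignores IPC, cache, memory, and everything else real, so the numbers are illustrative only.

```python
def effective_rate(clock_ghz, p, n):
    """Rough throughput proxy: clock frequency scaled by the Amdahl
    speedup for an n-core chip on a workload with parallel fraction p.
    Illustrative only -- real performance depends on far more than clock."""
    speedup = 1.0 / ((1.0 - p) + p / n)
    return clock_ghz * speedup

# The hypothetical match-up from the post:
yonah_parallel = effective_rate(1.86, 0.95, 2)  # ~3.54: dual core wins big here
dothan         = effective_rate(2.00, 0.95, 1)  # ~2.00: one core, no speedup
yonah_serial   = effective_rate(1.86, 0.00, 2)  # 1.86: fully serial task

# On the serial 5%, the higher-clocked single-core Dothan comes out ahead:
print(yonah_serial < dothan < yonah_parallel)  # True
```

So the same pair of chips can rank either way depending entirely on the serial fraction of the task, which is the point being made above.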
Going parallel would not help the general user-base as much as is commonly thought; it greatly helps only a certain segment of it. So the question becomes how much the user doing non-intensive compute/business/research tasks would benefit from dual core. Assuming the OS multitasks perfectly, and the code is near-perfect when it comes to multi-threading (a far reach, I assure you), you only gain a larger increment of application speed per chipset model release and generation, again assuming memory doesn't limit it. Those are quite a few assumptions: full OS support, full application support, etc.
So it really gives you the most tangible results if you use your system to the maximum, have tasks that can actually benefit from dual core (video, still limited by parts of the video card; rendering, again limited by the video chipset), and have already maxed out every other resource. That is also why I would wait for the "Performance" models of the T60 and X60 to come out; it makes little sense to me to pay that much for a dual core when the tasks I am thinking of already tax the system to the maximum on other compute resources.
