jiantbrane wrote: Am I right in thinking that even the less desirable of the SSDs will leave 7200-rpm HDDs in the dust, both in speed and reliability?
I wish I could give you a straight answer on that, I really do. Indeed, your question crystallizes why I started this thread. If you're talking about the current SSDs in Thinkpads, it looks like the answer is "Yes". I have no experience to back this up, but am going by marcc's and moore101's posts. On the other hand, if you're talking about SSDs in general then the answer is a resounding "No", bearing in mind that we're talking about the "less desirable" (often older) SSDs as you put it.
I'm piecing this together from several sources. For a start there is my personal experience with an original OCZ Core series drive, which is based on the first revision of the infamous JMicron controller. As the reviews say, it stalls in certain situations, e.g. on small-file writes or when intermixing a lot of small reads and writes. In practice this meant that, while Windows boot time improved, as soon as an anti-virus update hit, my machine would stall and become completely unresponsive for 10 seconds, mouse included. That simply does not happen with a conventional hard drive and is completely unacceptable in my book. I used that SSD only for light duty as the boot drive in my multimedia machine, so I can't comment on what the experience would be like in a laptop, but I'm sure the stalling wouldn't be limited to anti-virus updates. Needless to say, those updates don't cause this kind of stalling with either a conventional disk or a better SSD, both of which I've used. The stalling is a known problem with the JMicron controller, confirmed by online reviews.
Even SSDs that don't outright stall may deliver worse-than-HDD performance in certain usage scenarios. They typically struggle with random small-file writes, as reviewed on this page:
http://www.anandtech.com/storage/showdo ... =3531&p=25
The Samsung-based drives (Samsung SLC, OCZ Summit) offer only one-half to one-third the performance of the conventional drives in that test. My attention was first drawn to this after reading a forum post 2 or 3 years ago by a software developer who used an SSD in his machine. A compiler typically writes lots of small files during a build, and his (then available) SSD was reportedly 3 times slower at this task than a conventional disk. If that sounds bad, have a look at the JMicron results in the above test: 50 times slower than a conventional disk! That's a worst-case scenario, but still. Incidentally, those results refer to the 'B' revision and a RAID-0 implementation thereof, respectively. JMicron, through OCZ and others, has released a string of enhanced versions of its controller, but by all accounts these have merely been workarounds for a fairly abysmal problem, not a redesign. The marketing push for them has been quite misleading too. For example, some are touted as offering an internal RAID-0 architecture, which only sounds good if you don't know that Samsung and Intel use 8-way and 10-way implementations internally without making a big fuss about it, i.e. they are much more advanced than a 2-way RAID.
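If you want to get a feel for that compiler-style workload on your own drive, it's easy to approximate with a few lines of script. This is only a minimal sketch of my own (file count, file size and names are arbitrary assumptions), not one of the benchmarks used in the reviews, and it measures sequential small-file creation rather than true random writes:

```python
import os
import tempfile
import time

def small_file_write_benchmark(n_files=200, size=4096, directory=None):
    """Write n_files files of `size` bytes each and return elapsed seconds.

    Roughly mimics a compiler emitting many small object files.
    """
    directory = directory or tempfile.mkdtemp()
    payload = b"x" * size
    start = time.perf_counter()
    for i in range(n_files):
        path = os.path.join(directory, f"obj_{i}.o")
        with open(path, "wb") as f:
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())  # force each write out to the drive
    return time.perf_counter() - start
```

Run it once on the SSD and once on a conventional disk (pass a `directory` on each drive) and compare the timings; the fsync per file is what keeps the drive from hiding behind the OS write cache.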
I'm a bit perplexed by the good small-file results in this thread, but must confess that I simply recommended the first benchmark I came across. It's one I've seen used in some reviews, but it might not be the best at highlighting the issues with SSDs. Going over Anand's review again, I noticed he used IOMeter with some of his own custom scripts, which appears to give radically different results. Perhaps ATTO, even though it uses small files, generates sequential rather than random activity?
The second problem with SSDs is performance degradation over time. Some of them offer their maximum speed only until their full capacity has been written once, and some may degrade further after that. This is partly an operating-system problem. With a conventional hard disk you simply overwrite any unused space when you save a file. With an SSD that space must be erased first, which is a tricky operation for the controller to manage, because erasing can only be done in bulk, one relatively large block at a time. However, the very first time you use an SSD, it knows its full capacity was erased at the factory, so it can forgo erasing until you've filled it to capacity or made file updates that add up to the capacity of the drive. This is actually quite sneaky, as the drive presents its best side to an uninformed reviewer fresh out of the factory, but may slow down forever after. On the other hand, if the operating system could tell the drive which files have been deleted, the drive could save itself some work and come closer to its fresh-from-the-factory performance again. I believe the standard needed for this, the so-called TRIM command, is only being laid down now. It may or may not find its way into Vista / Windows 7.
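The erase-before-write logic above can be boiled down to a toy model. This is purely my own illustration (no real controller is anywhere near this simple, and I've left out page-level bookkeeping and data copying entirely), but it shows why a fresh drive never stalls, why a once-filled drive stalls on every overwrite, and what TRIM changes:

```python
# Toy model of erase-before-write on flash. Assumption for illustration
# only: one erase unit per "block", no page-level detail, no data copying.

class ToySSD:
    def __init__(self, n_blocks=8):
        # Fresh from the factory: every block is pre-erased.
        self.needs_erase = [False] * n_blocks
        self.foreground_erases = 0  # erases that stall a write

    def write_block(self, b):
        if self.needs_erase[b]:
            # Flash can't overwrite in place: the block must be erased
            # first, right in the write path -- this is the slow case.
            self.foreground_erases += 1
        self.needs_erase[b] = True

    def trim_block(self, b):
        # With TRIM, the OS tells the drive the block's contents were
        # deleted, so it can be erased in idle time, off the write path.
        self.needs_erase[b] = False

ssd = ToySSD()
for b in range(8):
    ssd.write_block(b)              # filling a fresh drive: no stalls
first_pass = ssd.foreground_erases  # 0 -- the "sneaky" factory-fresh speed
for b in range(8):
    ssd.write_block(b)              # overwriting: every write stalls
second_pass = ssd.foreground_erases # 8 -- degraded once-filled behaviour
```

Calling `trim_block` on deleted blocks before the second pass brings `foreground_erases` back to zero, which is exactly the benefit TRIM is supposed to deliver.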
As to reliability, the jury is still out. It seems intuitive that SSDs must be more shock-resistant than conventional disks. I believe the potential problem with SSDs lies elsewhere, namely in their limited write endurance: an individual flash cell can only be written reliably so many times, roughly 10,000 times for MLC. SSD controllers are programmed to be smart about this and spread writes as evenly as possible across the full capacity of the drive. I'm not aware that this is something to worry about, as I believe, or at least hope, that manufacturers have done their homework, but to say that SSDs leave conventional hard drives in the dust may be taking it too far.
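A back-of-the-envelope calculation shows why that write-spreading matters so much. The numbers here are my own made-up illustration (a single hot address updated a million times, a drive with a million cells), and "ideal" levelling is an upper bound no real controller achieves:

```python
# Illustrative only: why spreading writes evenly keeps cells under their
# endurance limit. Numbers below are assumptions, not drive specs.

CELL_ENDURANCE = 10_000  # ballpark reliable writes per MLC cell

def worst_cell_wear_naive(writes_per_address):
    # No levelling: each logical address is pinned to one physical cell,
    # so a frequently updated address wears its one cell out alone.
    return max(writes_per_address.values())

def worst_cell_wear_levelled(writes_per_address, n_cells):
    # Ideal levelling: the controller spreads all writes evenly
    # across every cell in the drive.
    total = sum(writes_per_address.values())
    return -(-total // n_cells)  # ceiling division

# One "hot" address updated a million times, drive with a million cells:
hot = {"hot_sector": 1_000_000}
naive = worst_cell_wear_naive(hot)                   # 1,000,000 writes
levelled = worst_cell_wear_levelled(hot, 1_000_000)  # 1 write per cell
```

Without levelling the hot cell blows through its 10,000-write budget a hundred times over; with it, the same workload barely registers, which is why I'm inclined to trust drives whose controllers do this properly.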