Shant wrote:
I've got 2 500g SATAs in the computer and I was tempted to make a raid array but I put redundancy/insurance well above speed
I have the same concern, which is why I build RAID 0+1 (RAID 10) arrays for my machines. RAID 0+1 is just what it sounds like... striping for performance and mirroring for redundancy. It takes four drives (minimum), but it roughly doubles your throughput while giving you more or less instant duplication of your drives.
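To make the layout concrete, here is a toy Python sketch of how a four-drive RAID 0+1 setup spreads data around. The drive names and the 64 KB stripe size are made-up illustration values; a real controller does this in firmware or in the driver, not like this.

```python
# Toy illustration only: RAID 0+1 splits data into stripes across two drives
# (speed) and mirrors each stripe onto two more drives (redundancy).
STRIPE_SIZE = 64 * 1024  # hypothetical 64 KB stripe

def raid01_layout(data: bytes) -> dict:
    drives = {"disk0": [], "disk1": [], "disk2": [], "disk3": []}
    stripes = [data[i:i + STRIPE_SIZE] for i in range(0, len(data), STRIPE_SIZE)]
    for n, stripe in enumerate(stripes):
        if n % 2 == 0:
            drives["disk0"].append(stripe)  # half of the striped set...
            drives["disk2"].append(stripe)  # ...and its mirror copy
        else:
            drives["disk1"].append(stripe)  # the other half...
            drives["disk3"].append(stripe)  # ...and its mirror copy
    return drives

layout = raid01_layout(b"x" * (256 * 1024))
print({name: len(stripes) for name, stripes in layout.items()})
```

Reads and writes alternate between disk0 and disk1 (that's the speedup), while disk2 and disk3 always hold identical copies (that's the insurance).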
Shant wrote:
I'm assuming defective drives out of the box are much more rare with SSD, no?
I don't know any statistics, but hard drive infant-mortality failures are not super-common. There will be the occasional early failure with SSDs, too. Whether a failure takes a week, a month, a year, or a decade, you still need backups. There is no permanent way to store data without re-copying it from time to time. Even optical disks develop errors after ten years or so (sooner if abused, obviously).
Shant wrote:
I'd built a raid array on my old computer, and perhaps things have changed now, but at the time if one drive failed, your data's gone.
It depends on the type of array. Most professional arrays are far more reliable than single drives because they use some form of redundancy. There are several types, but RAID 1, 5, and 10 are the most common redundant ones. The truth is that RAID 0 increases the likelihood that you lose data (and probably all of it at once), so, yeah, if the data is important then you want a notch up in sophistication. I mentioned RAID 0 and 10 because they give you the highest performance increase without a fancy (expensive) controller.
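Here is the back-of-the-envelope math behind that, as a small Python sketch. The 3% annual failure rate is an assumed number purely for illustration, not a real drive statistic.

```python
# Why striping raises your risk: with RAID 0, losing ANY one drive loses the
# whole array. The 3% annual failure rate below is an assumption for the demo.
p = 0.03  # assumed chance that one drive fails in a given year

single_drive = p
raid0_two    = 1 - (1 - p) ** 2   # array dies if either drive dies
raid1_two    = p ** 2             # array dies only if both die (ignoring rebuild time)

print(f"single drive     : {single_drive:.2%} chance of data loss per year")
print(f"RAID 0, 2 drives : {raid0_two:.2%}")
print(f"RAID 1, 2 drives : {raid1_two:.4%}")
```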
Shant wrote:
The solution would just be to get another drive and keep that one to backup the raid array
Exactly. Inexpensive external drives rock for that.
Shant wrote:
I'll definitely do that eventually because you're right, that's a cheap way to damn near double the speed. You can't beat that.
Right. Even better, the disk tends to be a bottleneck on most PCs, so doubling your performance there will make the computer "feel" much faster at many tasks. Assuming some kind of Core 2 CPU, I would say that disk performance is the second or third most significant performance enhancement, after adding lots of memory and maybe a decent video card with a lot of memory on it. Memory is roughly 1,000 times faster than disk, so memory is Numero Uno.
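If you want to see that gap on your own machine, here is a quick-and-dirty Python sketch that times a pure in-memory copy against a write to disk. The 256 MB test size is arbitrary, and the OS cache and your particular drive will color the numbers, so treat the output as a ballpark only.

```python
# Rough memory-vs-disk comparison. Results vary a lot by machine; this is an
# illustration, not a proper benchmark.
import os
import tempfile
import time

SIZE = 256 * 1024 * 1024          # 256 MB of test data (arbitrary)
buf = os.urandom(SIZE)

t0 = time.perf_counter()
_ = bytes(buf)                    # pure memory-to-memory copy
ram_secs = time.perf_counter() - t0

path = os.path.join(tempfile.gettempdir(), "disk_speed_test.bin")
t0 = time.perf_counter()
with open(path, "wb") as f:
    f.write(buf)
    f.flush()
    os.fsync(f.fileno())          # push the data all the way to the drive
disk_secs = time.perf_counter() - t0
os.remove(path)

print(f"RAM copy  : {SIZE / ram_secs / 1e6:10.0f} MB/s")
print(f"Disk write: {SIZE / disk_secs / 1e6:10.0f} MB/s")
```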
Shant wrote:
I built the new one and literally in 6 hours I went from one to the other.
OK, that's a good comparison. But, wait...
Shant wrote:
it took me about 30-35 minutes to convert an mpeg movie to DVD. Now it takes about 7-8 minutes.
Going from 30-35 minutes down to 7-8 minutes is roughly a 4-5x speedup in handling large data! What's your beef, man?! That's a huge improvement. There are a few things that contribute to it, but memory and disk are the biggest factors, with CPU and system bus after that.
Shant wrote:
I suppose another decade and I'll be able to shave 15 seconds off my startup time...
My netbook boots Windows XP in about 15 seconds, and that's with a puny Atom CPU and a slow-as-molasses 1.8-inch disk drive. Boot time can vary widely, but the more stripped-down your PC is, the faster it will boot. What operating system are you using?
Shant wrote:
I ran data transfer tests on my old system's hard drives. I had several hard drives in it, 2 of which were striped RAID 0. The newest non raid drive transferred at around 50MB/sec. The raid array transferred at 80mb/sec, so you're right it's nearly double the speed.
Nice improvement! It will be interesting to see how much faster than 50 MB/s the new PC gets. You should see a noticeable increase if you are going from PATA (old parallel ATA) to SATA.
Shant wrote:
On paper my new drives are supposed transfer data 6 times faster than my old drives. I'm betting it's well less than double.
You will never know the full story. It is nearly impossible to test the transfer rate of a decent disk drive at home... you would need some expensive gear to get anywhere near the drive's limit. Remember that your system bus and disk controller are slower than the drive itself. Plus, every time the drive has to stop, seek a new sector, and start reading again, things slow down. So, yes, doubling your overall speed would be terrific. You will see the greatest improvement by copying a large, unfragmented file from one drive on one controller to another drive on another controller. Even with that test, your motherboard's system bus may be the limiting factor.
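If you just want a rough number, a sketch like the following Python script will time a sequential read of any big file you already have (the file path is whatever you pass in). Caches, the controller, and the system bus all color the result, so it is an estimate of real-world throughput, not a true drive spec.

```python
# Crude sequential-read timer: point it at a large, unfragmented file.
# Usage: python read_speed.py path\to\big_file.mpg
import sys
import time

path = sys.argv[1]
CHUNK = 8 * 1024 * 1024           # read in 8 MB chunks

total = 0
t0 = time.perf_counter()
with open(path, "rb", buffering=0) as f:
    while True:
        block = f.read(CHUNK)
        if not block:
            break
        total += len(block)
secs = time.perf_counter() - t0

print(f"{total / 1e6:.0f} MB in {secs:.1f} s = {total / secs / 1e6:.0f} MB/s")
```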
Shant wrote:
hmmm...I can't find that utility at the moment...do you know of a simple HD data transfer utility I can download?
Hmmm... do you mean a benchmarking tool, or something to copy the complete contents of one drive to another? Check TomsHardware.com or ZDNet.com for a benchmarking tool, then pay attention to exactly what you are benchmarking. If you want to copy an entire drive, take a look at Acronis TrueImage. It is commercial software (maybe $50 for home computers), but it makes upgrading disk drives and making perfect backups a cinch.
Shant wrote:
Bullwinkle wrote:
It's sort of a long topic, but wires do not transmit electricity at the speed of light...
OK this explains a lot. I had always assumed that wires did transmit at near light speed, and that it was only the mechanics of hard drives acting as a barrier to that speed. Why don't they transfer at light speed, and roughly how fast DO they transfer (relative to light speed)?
Didn't I just say it is a long story?! OK... you know how they have highways in L.A. that are eight lanes wide? 8 lanes @ 60 mph with an average of, say, 100 feet of road length per car... they should be able to carry something like 400 cars per minute when they are pretty full but not bumper-to-bumper. Now picture trying to drive 8 cars from the office parking lot to El Torito 1 exit away (let's call it 1 mile just to make the numbers easy). So 8 cars / 400 cars per minute = 1.2 seconds to get from work to lunch, right? :)

As you can see, it's more complicated than just the width of the pipe. You have on-ramps and off-ramps to deal with, traffic, red lights, etc. It could take ten minutes to do that drive, depending on factors that have nothing to do with the bandwidth of the highway.

When electricity flows through a wire it generates a magnetic field around the wire. Building that magnetic field takes time. When you stop the flow of electricity, it takes time for the field to collapse. The field is a form of energy, just as the electricity is. Think of building the field as "filling up the pipe" with energy, and collapsing the field as "draining the pipe". This filling and draining (governed by the wire's "inductance") is pretty slow compared to the speed of light. In order to send a "1" down the wire, you start at 0 voltage, increase to whatever a one is (say, 3v), then wait until the field stabilizes so that the voltage at the receiving end of the wire is a nice, steady 3 volts, then drop the voltage back to 0 again. But the collapsing magnetic field continues to generate voltage at the receiving end until the field collapses so, again, there is a time lag before the receiving end sees a nice, steady 0 volts. The inductance of the wire is comparable to the on-ramps and off-ramps of the highway.

But there is more... when you put two wires side by side, the magnetic fields interact and induce currents in each other! So one wire that is supposed to be at 0v receives part of the magnetic field from the next wire, which partially raises the voltage in the wire that should be at 0!

The beauty of digital signals is that we can throw out any signal that is not, say, at least 2.5 volts (a 1) or less than 0.5 volts (a 0). Anything in the middle is "noise" and we just ignore it, waiting until we see a clear 0 or 1. That waiting takes time. How much time depends on many factors, including temperature. So, in order to make sure that we get a good, clear signal, we wait a little extra.

Now, in order to move millions of those signals through a computer per second, we have a LOT of "red lights" and on/off ramps to consider, so computers use a "clock" to move everything one complete step at a time. It's like having a red light at every intersection. When you get a green light you accelerate (charge the field), drive, then decelerate (collapse the field) at the end of the block, wait at the red light until the next clock signal, then go again. If you are late and miss your green light, then you may have to sit and wait through almost a complete cycle before you have a chance to go again (an extra WAIT state, in computer lingo).

Like traffic, if our signals collide they cause accidents (which show up as blue screens or other quirks), so we have to be ultra-careful that EVERY signal can get to its next "block" on each cycle of the clock. That means more waiting.

All of this waiting is what makes travel through a computer slow. How slow? Slow enough that supercomputer manufacturers used to use oscilloscopes and wire cutters to trim each wire to exactly the right length in order to minimize wasted clock cycles. It's VERY slow compared to the speed of light.

Does that make sense?
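That "throw out anything in the middle" rule is easy to picture in a few lines of Python. The 0.5 V and 2.5 V thresholds are the ones from the example above; real logic families define their own levels.

```python
# Toy version of digital signal thresholds: anything between the "low" and
# "high" cutoffs is treated as noise, and the receiver keeps waiting.
LOW_MAX = 0.5    # at or below this, read a 0
HIGH_MIN = 2.5   # at or above this, read a 1

def classify(voltage: float) -> str:
    if voltage >= HIGH_MIN:
        return "1"
    if voltage <= LOW_MAX:
        return "0"
    return "noise -- keep waiting"

for v in (0.1, 0.4, 1.3, 2.2, 2.7, 3.0):
    print(f"{v:4.1f} V -> {classify(v)}")
```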
Shant wrote:
shouldn't it be much cheaper to manufacture a SSD than a hard drive, and shouldn't that be reflected in retail prices relatively soon?
Semiconductor manufacturers (and many other commodity manufacturers) set volume pricing based on a Non-Recurring Engineering cost (NRE) and a per-unit cost. The NRE is, roughly, how much it costs to design, tool up for, fab, and test the first successful batch. After that, the per-unit charge kicks in, which covers the cost of making the silicon, testing and other processing, and a bunch of other recurring costs. So the NRE cost of a CPU might be $10 million and the per-unit cost might be $25. You have to sell a lot of them before the price to the customer gets anywhere near that $25. In fact, long before the cost drops that far, a new CPU is released, and the customer starts paying for that NRE charge again. Costs don't drop to their rock-bottom for a couple of years, at which time the unit is typically discontinued and replaced with something newer.

Now, as you say, semiconductor densities increase over time, which reduces per-unit cost and improves performance. But disk drive technology is constantly improving as well, so it isn't that easy for semiconductors to catch up to disk price/performance ratios. In fact, there is more room for improvement in disk drives than in the current way that we make semiconductors. We are already making semiconductors so small that we cannot draw the lines with a laser beam because it is too fat! The semiconductor manufacturers have to fire individual beams of electrons to make a thin enough line (e-beam lithography). That, in turn, requires a tool that is essentially a small linear accelerator. If that sounds mondo expensive, you're right!

So expect disk drive price-to-performance to continue to improve at its current rate, while semiconductor price-to-performance is already slowing down. In other words, until we have a MAJOR shift in the way that we make semiconductors, the price-to-performance for solid state memory will never beat hard drives. Eventually that technology shift will happen, but they've been working on it as long as they have been making semiconductors (roughly since the early 1960s) and they haven't come up with a good solution yet. IBM has a computer that uses super-cooled lead (Josephson Junctions) instead of silicon semiconductors to make a super-dense computer the size of a grapefruit, but it requires a refrigerator the size of a building to make the liquid helium that chills it... it's not exactly home computer stuff. They made one, loaned it to the NSA, and that's it. For now, at least.
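Using the example numbers from above ($10 million NRE, $25 per unit), a few lines of Python show why the sticker price only approaches the per-unit cost after a lot of chips have shipped. The volumes are arbitrary illustration values.

```python
# Amortizing a one-time NRE cost over shipped volume, using the example
# figures from the post ($10M NRE, $25 per unit). Volumes are made up.
NRE = 10_000_000       # one-time design/tooling/test cost
PER_UNIT = 25          # recurring cost per chip

for volume in (100_000, 1_000_000, 10_000_000):
    cost_per_chip = NRE / volume + PER_UNIT
    print(f"{volume:>10,} units shipped -> ${cost_per_chip:,.2f} per chip")
```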
Shant wrote:
if the SATA bus were not a limiting factor, how much faster would a RAM drive be than SATA?
Good question. If we skipped the drive controller completely, then how would we attach our RAM drive to the computer? You probably know about "RAM disks", which are just software partitions in system memory that act like disk drives. Those things ARE fast -- on the order of 1,000x disk drive speed. But they have to use a controller as well -- the memory controller. The memory controller is much faster than the disk controller, but it cannot be put on a long cable. It pretty much has to be on the motherboard -- and right next to the chipset at that. Gotta keep those ultra-high-speed wires short!

So we could expand system memory to hundreds of gigabytes, and that is precisely what 64-bit versions of Windows allow. Most computers built so far are limited to the 4GB that 32-bit Windows maxes out at, but newer machines are increasingly being built that can take much larger memories. Modern servers can take 64-128GB of memory, or more... if you can afford it!

So the way to get balls-out performance from solid state memory is by increasing system RAM rather than by using SSDs. It will cost you, but it will be fast.
Shant wrote:
how do you use such a drive if it loses memory when you turn it off?
One way is, as you suggest, to load the drive every time you boot. You could use it for a swap drive, or maybe load up your game files on it and have an extremely fast "disk" while playing your game. But the disk would be "created" (by copying files from a hard disk) every time the system boots.Another way is to use battery backup on the RAM drive. As long as you keep power to your computer, the battery backup will keep your data alive. But don't unplug the computer and put it in the closet for the summer, then expect your data to still be on the RAM drive when you plug it back in. In other words, a RAM drive is a temporary storage device, not a real replacement for a hard drive or a flash drive. That's ok, because most folks cannot afford to have a RAM drive that is much larger than a USB key drive, anyway.
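As a minimal sketch of that "rebuild it at every boot" idea: assuming you already have a RAM disk mounted at a drive letter (the R:\ path and game folder below are hypothetical examples), a few lines of Python scheduled to run at logon can repopulate it from the hard drive.

```python
# Minimal sketch: repopulate a RAM disk at startup by copying files from a
# hard drive. "C:\Games\MyGame" and "R:\MyGame" are hypothetical paths; you
# would schedule this to run at logon (e.g., via Task Scheduler).
import shutil

SOURCE = r"C:\Games\MyGame"     # game folder on the hard disk (example path)
RAM_DISK = r"R:\MyGame"         # RAM-disk location (example path)

shutil.copytree(SOURCE, RAM_DISK, dirs_exist_ok=True)
print("RAM disk populated; point the game at", RAM_DISK)
```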
Shant wrote:
You mean I could create a RAID array with nothing but software??
I will give that a qualified "yes". Yes, you can. You may or may not need a different version of Windows to do it, but it can be done. Windows XP Pro can, I think, do it, and it appears that Windows 7 Pro can make RAID 0+1 as well. Interestingly, it looks as though all versions of Windows 7 can make a RAID 0 array, which is probably because Microsoft knows that it is their best chance to give their customers a cheap performance boost.

OK, now there is one more "catch". You can mirror (RAID 1) your system drive, but striping (RAID 0) is more complex: you need Windows installed to be able to build the array before you can install Windows on the array. So that means either two installations of Windows and a third (or fifth) disk drive, or a RAID controller. You could use any old disk drive from an old computer as the system drive to build the array, so that might be a solution for you.

See, I told you it was a long story! Hope that helps.

B