PCIe vs RAID Card
I'm trying to get my head around some of this new technology,
so could you please explain the differences and/or benefits of each
of these technologies? What I don't understand, for example, is:
if you use a PCIe extender to connect to a JBOD, don't you need
a RAID controller card in the JBOD? So where is the benefit,
because won't that card in the JBOD give you the same performance
hit as if it were in the host?
Aw, come on, don't you miss the days of cursing out the GVG 200 pod?
They never let me touch the pod - Chief Engineer only...job security.
TnT Video Services, Inc.
Fort Lauderdale, FL
Most of the "chief engineers" never knew how to even use the GVG 200 pod. It was all automatic setup; there was no brain surgery to using the pod.
To answer your question: historically, everyone has been using RAID cards for drive arrays. And as time went on, we wanted more and more drives. In the SCSI days, you could hang 15 hard drives off a controller (SCSI IDs 0-15, with one ID taken by the host adapter). But today, with 16-bay RAID arrays common, people want a lot more than 16 drives. So SAS/SATA expansion came out, which lets you daisy-chain multi-drive arrays so you can have LOTS of disk drives hooked up. Wonderful cards like the ATTO R380 and Areca 1680x allow up to 128 disk drives to be run by a single RAID controller card (think AVID, Maxx Digital, Sonnet, etc.).
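Just to put some rough numbers on that scaling, here's an illustrative sketch (the 128-drive ceiling and 16-bay chassis size come from the products mentioned above; the function itself is just back-of-the-envelope arithmetic, not any vendor's tool):

```python
# Illustrative arithmetic only: how many 16-bay chassis you can
# daisy-chain before hitting a RAID card's drive ceiling
# (e.g. 128 drives on cards like the ATTO R380 / Areca 1680x).

def max_chassis(drive_ceiling=128, bays_per_chassis=16):
    """Whole chassis that fit under a single controller's drive limit."""
    return drive_ceiling // bays_per_chassis

print(max_chassis())        # 128-drive card, 16-bay boxes -> 8 chassis
print(max_chassis(15, 15))  # old parallel SCSI bus: one box and you're done
```

So one modern daisy-chainable card replaces what used to take eight separate SCSI buses' worth of controllers.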
Some companies didn't want to use SAS/SATA expanders in their products, and felt that expanding the PCIe bus in the computer was a better way to go. Companies like CalDigit (think SuperShare) and JMR (who actually uses the ATTO R380) felt that having PCIe expanders in their chassis would allow users to have dedicated host controllers in each chassis (for 8 or 16 drives), and then you could use PCIe expansion to get better performance. One of JMR's wonderful 16-bay chassis has PCIe slots INSIDE the drive chassis, and they put TWO ATTO R380 cards in the chassis, each of which runs 8 disk drives. This is how they get their amazing speed performance. If you want to expand beyond 16 drives, you buy another expansion chassis and use a PCIe expander (built into the JMR chassis) to hook up the next chassis. Is it fast - you bet it is. Does it cost more money - you bet it does.
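The speed argument for two cards in one box can be sketched like this (every number here is a placeholder to show the shape of the math, not a benchmark of any real card or drive):

```python
# Why two RAID cards inside one 16-bay chassis can be faster: a single
# controller tops out at some aggregate rate, so splitting the drives
# across two controllers doubles the ceiling. All figures are
# placeholders for illustration, not measured performance.

def chassis_throughput(drives=16, controllers=2,
                       per_drive=100, controller_cap=800):
    """Aggregate MB/s with drives split evenly across controllers,
    each controller limited to its own throughput ceiling."""
    per_ctrl_drives = drives // controllers
    per_ctrl = min(per_ctrl_drives * per_drive, controller_cap)
    return controllers * per_ctrl

print(chassis_throughput(16, 1))  # one card bottlenecks at its cap
print(chassis_throughput(16, 2))  # two cards: twice the ceiling
```

Same 16 drives either way; the second controller is what raises the ceiling.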
As you know, all decisions are made by "how much does it cost", and super cool solutions cost the most money.
Also remember that PCIe expander chassis have been on the market for quite some time. I was the original user of the Dulce ProEX, which was a PCIe expander for Mac computers, so I could hook up multiple Dulce RAID arrays (which do not support SAS/SATA expansion). But that chassis was about $3000 for the empty box, and every time you wanted to add more drives, you not only had to buy another expander chassis (for $3000), you had to buy another Areca 1221x SAS/SATA host adapter. When products like the Areca 1680x and ATTO R380 came out and allowed you to daisy-chain chassis, it was cheaper (and required less hardware), and no one wanted to buy the ProEX anymore.
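A quick sketch of why the economics killed that approach (the $3000 chassis price is from my experience above; the host adapter price is a made-up placeholder, and I'm treating the chassis price as equal in both cases just to isolate the per-step adapter cost):

```python
# Rough cost model of the two expansion approaches. $3000 per expander
# chassis reflects the ProEX era; the $700 host adapter price is a
# placeholder assumption, not a quoted figure.

def pcie_expander_cost(extra_chassis, chassis_price=3000, hba_price=700):
    """Every added chassis needs its own expander box AND host adapter."""
    return extra_chassis * (chassis_price + hba_price)

def daisy_chain_cost(extra_chassis, chassis_price=3000):
    """Daisy-chainable cards reuse one controller; you only buy chassis."""
    return extra_chassis * chassis_price

for n in (1, 2, 3):
    print(n, pcie_expander_cost(n), daisy_chain_cost(n))
```

The gap grows by one host adapter per expansion step, which is exactly the hardware the daisy-chain cards let you skip.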
Again - cost is everything. With that said, with all the super cool solutions out there right now, when 6Gb/s drives and components become readily available, many of these current expensive solutions will seem obsolete, even if they are super fast right now. I know quite well that "everyone" wants SAS drives (instead of SATA), but they are too expensive - same with SSD drives. Same with PCIe expanders. There are all kinds of wonderful cool products out there, but if something cheaper is "around the corner" (less than six months to a year away), people will never spend the extra money.
This is why I have seen Maxx Digital Final Share become so wildly popular - when you compare it to any other shared storage solution on the market, it really isn't that great, but it does let multiple FCP systems share storage, and it costs dramatically less than anything else on the market - so people are more than happy to deal with its limitations and workarounds (unless you are Disney, or a major TV station, in which case you usually buy Xsan or AVID Unity).