
SAN Latency Issues - 10 gig Copper vs. Fiber Channel

COW Forums : SAN - Storage Area Networks

David Tiberia
SAN Latency Issues - 10 gig Copper vs. Fiber Channel
on Apr 16, 2013 at 10:00:58 pm

Hi Guys,

Our post and production house has a very functional MetaLAN-based SAN right now that is serving us well. We've got four 1-gigabit links and two direct-connect 10-gigabit copper links. Our 10-gig links run as high as 400 megabytes per second sustained, and the 1-gigabit links run as high as 90 megabytes per second sustained in real-world situations (higher using MetaLAN's built-in speed tests). We run Avid editing workstations and have no issues with multiple HD streams or dropped frames.

We have only one issue: Latency.

Now when I say latency, I'm talking about basic tasks, mostly operating-system file browsing. For example, if I navigate to "My Computer" on Windows 7 and choose a SAN drive and there are only a few files and folders, no problem. If it's a big file (a video file), it's fine. If there are 50 small files, the folder will take FOREVER to populate: 45 seconds, 2 minutes... a while. If we copy a big file, it goes fast. If it's 150 JPEGs from a digital camera, it'll take 4 minutes before the operation even starts. All our storage is currently housed in a dedicated server.
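(If anyone wants to put numbers on this kind of browse latency, a rough Python sketch like the one below, timing a folder listing plus a per-file stat against both a SAN path and a local path, is roughly what I mean. The paths are just placeholders for your own folders.)

```python
# Rough timing of a folder listing plus a per-file stat, which is close to
# what Explorer does when it populates a window. Paths are placeholders.
import os
import time

def time_listing(path):
    start = time.perf_counter()
    count = 0
    with os.scandir(path) as entries:
        for entry in entries:
            os.stat(entry.path)  # one metadata request per file
            count += 1
    return count, time.perf_counter() - start

for label, path in [("SAN", r"S:\projects\stills"), ("local", r"C:\temp\stills")]:
    count, seconds = time_listing(path)
    print(f"{label}: {count} entries in {seconds:.2f} s")
```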

Obviously, we were expecting some latency when we planned this, but not quite to this level. Now here's the point, and the question that I could use some help on...

We're looking to upgrade our SAN sooner rather than later, and we want to solve the latency issues. We had naturally planned on migrating to Fibre Channel, but everything we've read says that Fibre Channel is as good as dead and 10-gig Ethernet is replacing it. From my standpoint, I can't see that happening if the latency we're seeing on Ethernet is real. From talking to others I know, I'm under the impression that this is just an issue with copper Ethernet. I've been told the Mac handles it better than Windows (we're Windows based).

Do you guys have any better experiences with 10-gig Ethernet? Does it help to get off copper and move to 10 gig over fiber... or InfiniBand? I don't want to invest in Fibre Channel if it's dying a slow death...

Just curious if you guys have any experience with this...or if latency is an issue on your 10 gig ethernet links. If you're not having issues....what type of interconnect are you using on your links?

Thanks.

(I did repeat this post in the MetaSAN forum as well... I thought it applied to both groups, and I might get some different answers in each.)

- David T.


Eric Hansen
Re: SAN Latency Issues - 10 gig Copper vs. Fiber Channel
on Apr 16, 2013 at 10:26:27 pm

I don't believe it's an Ethernet vs. Fibre thing. I think what you're seeing is RAIDs designed for large video files with a large block size, versus drives designed for things like databases with a small block size. If you copy lots of tiny files to a file system with a large block size (your SAN RAIDs), it will seem like it's taking forever, but if you copied them to something designed for a ton of small reads/writes, like a server hosting a website, they would go much faster.
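A rough way to see that effect on its own, away from the network, is to time a bunch of small writes against one write of the same total size. The sketch below is just that, a sketch; the scratch folder, file count, and file size are placeholders you would adjust for the volume under test.

```python
# Write the same total number of bytes as many small files and as one big
# file, then compare. Scratch folder, file count and file size are placeholders.
import os
import time

TARGET = r"S:\scratch\blocksize_test"
SMALL_FILES = 500
SMALL_SIZE = 64 * 1024          # 64 KB "JPEG-sized" files
payload = os.urandom(SMALL_SIZE)

os.makedirs(TARGET, exist_ok=True)

start = time.perf_counter()
for i in range(SMALL_FILES):
    with open(os.path.join(TARGET, f"small_{i:04d}.bin"), "wb") as f:
        f.write(payload)
small_time = time.perf_counter() - start

start = time.perf_counter()
with open(os.path.join(TARGET, "one_big.bin"), "wb") as f:
    for _ in range(SMALL_FILES):
        f.write(payload)
big_time = time.perf_counter() - start

print(f"{SMALL_FILES} small files: {small_time:.2f} s")
print(f"one file, same bytes: {big_time:.2f} s")
```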

Can someone else chime in and confirm? It's been a while since I've dealt with block sizes.

e

Eric Hansen
Production Workflow Designer / Consultant / Colorist / DIT
http://www.erichansen.tv


Caspian Brand
Re: SAN Latency Issues - 10 gig Copper vs. Fiber Channel
on Apr 17, 2013 at 12:16:02 am

Even with locally formatted disks (Disk Utility on a Mac), smaller files take longer than larger files. I shoot a lot of time lapse as well as video on my DSLR, and transferring a 16GB card full of time-lapse images sometimes takes twice as long as a 16GB card full of larger .mov files.

-Caspian

Product Specialist
Studio Network Solutions


Neil Smith
Re: SAN Latency Issues - 10 gig Copper vs. Fiber Channel
on Apr 16, 2013 at 10:51:29 pm

The latency issue may not be directly related to networking I/O speeds but to how you've got your RAID striping and block sizing configured ... you should send an email to Tiger-Tech and see what their recommendation is for a MetaLAN SAN.

The other alternative to a GigE and 10GigE SAN is to consider an external PCI Express topology from ExaSAN ... one of their 12-bay RAID 5 arrays delivers 1200 MB/s to the desktop ... we were demoing their A12 boxes at NAB last week running XSAN with three Macs attached and were getting real-time performance with very low latency from 5K EPIC files and 4K ProRes 4444 QuickTime files.

If you're in the LA area come over to our place on the old Warner Hollywood Lot in West Hollywood and we'll give you a demo of the ExaSAN/XSAN combination ... if you haven't seen a PCI express SAN in action, you'll be in for a pleasant surprise, both in terms of price and performance!

And just so you know, we're running a '4K MADE EASY' training day on Saturday April 27th on The Lot which will feature XSAN running over ExaSAN RAID with FCP X and DaVinci Resolve round-tripping ... the price performance of XSAN on Mountain Lion with ExaSAN hardware is pretty amazing.

Details of the training event below:

http://www.lumaforge.com/styled-2/index.html

Cheers,
Neil

Neil Smith
CEO
LumaForge LLC
shoot it. store it. share it
323-850-3550
http://www.lumaforge.com


David Tiberia
Re: SAN Latency Issues - 10 gig Copper vs. Fiber Channel
on Apr 16, 2013 at 11:14:03 pm

I'll have to go back and take a look. I'm certain that we used their guidelines when we set up block size, etc. the first time. But it's worth a look.

I'm going to do a test on the server side as well in the morning to see if the latency persists when doing local copies...

- David T



Caspian Brand
Re: SAN Latency Issues - 10 gig Copper vs. Fiber Channel
on Apr 17, 2013 at 12:51:40 am

David,

Some of what you describe sounds like the difference between NAS and SAN protocols.

[David Tiberia] "If there are only a few files and folders, no problem. If it's a big file (a video file), it's fine. If there are 50 small files, the folder will take FOREVER to populate: 45 seconds, 2 minutes... a while. If we copy a big file, it goes fast. If it's 150 JPEGs from a digital camera, it'll take 4 minutes before the operation even starts."

SAN connections communicate at the block level and can run over 8Gb/16Gb Fibre Channel (using block-level SCSI commands) or 1GbE and 10GbE with iSCSI. I'm not saying it's the only factor, but it could be part of the equation. I'd venture a guess that the folder you say takes forever to populate is nearly instant by comparison from an internal drive or even a FireWire 800-connected drive.

You said you're currently using Tiger Technology's MetaLAN, which is akin to a high-performance NAS solution. They also make MetaSAN, which is iSCSI based. With 10GbE connections already in place, talking to your shares at the block level may help reduce some of the latency you describe. Have you asked them about the performance differences one could expect between MetaLAN and MetaSAN on the same hardware?

Can you add additional 10GbE ports to your server/storage?

What 10GbE HBAs are you using in your high bandwidth direct connect clients?

Regarding Fibre Channel being dead, I don't know about that. With ATTO's 16Gb Fibre Channel cards and PCIe Gen 3, computers will be able to crunch even larger files faster. We've seen SolarFlare 10GbE HBAs getting pretty much the same speed as 8Gb Fibre Channel in our testing on the Mac. Fibre has the reputation of being fast with very low latency, but there's also 40GbE now (though seemingly less available than Fibre Channel). Ethernet has the advantage of being able to run both block-level (iSCSI) and NAS (AFP, SMB, NFS) protocols over the same physical connection when needed. They both have their pros and cons, and both are very much alive IMHO.

Another thing to consider is that 10GbE over fiber optic cable uses the same cabling as 8/16Gb Fibre Channel, and while the cabling costs more than copper, fiber is still where it's at for reliability in cable construction and distance, not to mention cable space and faster speeds when available. So an investment in OM3 or OM4 fiber optic cable would be interchangeable between protocols.

https://support.qlogic.com/app/answers/detail/a_id/691/~/general-distance-s...

Regards,

Caspian

Product Specialist
Studio Network Solutions


David Tiberia
Re: SAN Latency Issues - 10 gig Copper vs. Fiber Channel
on Apr 17, 2013 at 1:05:22 am

Caspian,

This is really good info.

I can put more 10GbE cards in my box; that's not an issue. Also, I know they're Intel, I just don't know which model... we bought right at a transition time and I can't remember which one we got.

I was thinking about moving to fiber optic as an option, but wasn't entirely sure whether it reduces latency when running 10-gig Ethernet protocols compared to copper Ethernet... I've got experience with Fibre Channel, just not fiber Ethernet.

Let me grab some answers to questions when I get to the studio in the morning so I can fill in some more pertinent details.

Thanks.

- David



Yov Moor
Re: SAN Latency Issues - 10 gig Copper vs. Fiber Channel
on Apr 19, 2013 at 8:51:11 am

Hello. I manage a SAN server at my post house with MetaSAN (8Gb Fibre Channel, QLogic 8Gb switch) and also MetaLAN for some computers attached over 1Gb Ethernet.
I like MetaSAN and MetaLAN for the simple deployment. But I manage feature-film work with almost 120,000 sequence files in a single folder! We use OS X and Windows; on the Mac we avoid using the Finder for copying or moving files because of the latency, and because the volume sometimes hangs, so we mostly use the Terminal for copies and moves.
On the Windows side, Explorer is faster and more stable, but the command prompt is still better.
The MetaSAN configuration is not more stable or lower-latency than MetaLAN, just faster in bandwidth thanks to the Fibre (or other) data path.
I think this latency problem has more to do with NTFS or HFS+ being managed by MetaSAN or MetaLAN across computers, because NTFS and HFS+ were not made for sharing through a SAN structure, compared to CXFS or StorNext FS and so on.
MetaSAN and MetaLAN have the same latency because they both use Ethernet for metadata read/write access to the volumes.
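For what it's worth, a minimal scripted copy in the same spirit (no Finder or Explorer involved at all) could look like the sketch below; the source and destination paths are placeholders.

```python
# Minimal GUI-free copy of an image-sequence folder. Paths are placeholders.
import shutil
import time

SRC = "/Volumes/SAN/show/plates/seq_010"
DST = "/Volumes/SAN/backup/seq_010"

start = time.perf_counter()
# copytree walks the sequence once and copies file data plus basic metadata,
# without the per-file preview and UI work a Finder copy does.
shutil.copytree(SRC, DST, dirs_exist_ok=True)
print(f"copied in {time.perf_counter() - start:.1f} s")
```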


David Tiberia
Re: SAN Latency Issues - 10 gig Copper vs. Fiber Channel
on Apr 19, 2013 at 8:56:49 pm

Hi everyone. Just to come back and keep people in the loop: I'm working through the issues and doing some benchmarking to try to quantify the problem.

To be honest, I haven't spent much time diagnosing the real issues myself; a lot of these are coming to me as reports from our post team. After doing some of my own research and talking to Bernard at Tiger, I'm wondering if my server might be getting overtaxed at high-usage times, and that's causing the file-browsing issues.

Someone was asking what adapters we're using for 10Gb... I've got an Intel X520-T2 in the server and Intel AT2s in the workstations.

Transfer speeds were in the upper 500 MB/s range in testing today on our fastest disk RAIDs.
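In case anyone wants to sanity-check their own links against that number, a stripped-down sequential-read test could look like the sketch below; the test file path is a placeholder, and the OS cache can inflate the result if the file was read recently.

```python
# Minimal sequential-read throughput check. The test file is a placeholder;
# use a large file that has not been read recently, or the cache skews it.
import time

TEST_FILE = r"S:\scratch\one_big.bin"
CHUNK = 8 * 1024 * 1024   # 8 MiB reads

read_bytes = 0
start = time.perf_counter()
with open(TEST_FILE, "rb", buffering=0) as f:
    while True:
        chunk = f.read(CHUNK)
        if not chunk:
            break
        read_bytes += len(chunk)
elapsed = time.perf_counter() - start
print(f"{read_bytes / 2**20 / elapsed:.0f} MiB/s sequential read")
```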

Still, I'm going to research moving some workstations to FC... we want to potentially do more graphics and design work on some workstations connected to the SAN.

Thanks for the thoughts and please let me know if you guys think of anything else...I'll keep everyone posted on how it turns out.

- David



David Gagne
Re: SAN Latency Issues - 10 gig Copper vs. Fiber Channel
on Apr 24, 2013 at 2:32:14 pm

I know it sounds silly, but also check for file fragmentation and/or a nearly full drive. Either one can easily cause an unnecessary slowdown.
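A quick way to check the nearly-full part is something like the sketch below against the SAN volume; the mount point is a placeholder.

```python
# Quick free-space check on the SAN volume; the mount point is a placeholder.
import shutil

total, used, free = shutil.disk_usage("S:\\")
print(f"free: {free / total:.0%} of {total / 2**40:.1f} TiB total")
# Many RAID volumes slow down noticeably once they get past roughly 85-90% full.
if free / total < 0.15:
    print("Volume is getting close to full; that alone can hurt performance.")
```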


Elvin Jasarevic
Re: SAN Latency Issues - 10 gig Copper vs. Fiber Channel
on Apr 27, 2013 at 8:45:46 am

Hi David,

Eric is both right and wrong when he says he doesn't believe it's an FC vs. 10GbE thing. Yes, it has to do with the files, how they are laid out and their size, but it also has to do with the file system!

In general, FC has lower overhead than 10GbE, but in practice, with the right system, things can be very different.

This is where the NAS vs. SAN distinction comes in.

For example, with DDP - Dynamic Drive Pool (which is an IP SAN) - we can use multiple 10GbE ports from a single Myricom or Silicom card and get maximum performance of around 1500 MB/s from a Z800 or a top-of-the-range Mac Pro, which lets us push even uncompressed 4K in real time, as we demonstrated at NAB this month. This is our technology; we call it MCS - Multiple Connections per Session. You can of course use dual 1GbE, similar to ISIS 5000, with the difference that our storage system can push even more data (up to 10-bit uncompressed HD over two cables), and you can combine up to 4 x 1GbE from the same PC/Mac.

Many NAS people call this bonding or teaming, which is wrong. That kind of per-session multi-link throughput is only possible with SMB 3, which is implemented in Windows 8 and Server 2012; in reality, on NAS systems link aggregation is used for load balancing or failover, and NAS systems carry a lot of overhead, especially with a large number of clients and multiple streams (you get more latency and less sustained performance).

DDP's speed comes from being an IP SAN with our own iSCSI implementation and our own file system.

Since you're using Avids, I can tell you that we have something similar to ISIS and Unity: Avid-style colored bin locking, with the difference that you do not need to set up your projects or users in order to share the data, unlike some other solutions. This means a PC can share the same projects with a Mac, with red or green indicators on each bin depending on who opened the bin first.

So why don't you contact our US distributor, Cinesys Inc., and ask them for a demo? We've posted a few videos on our Facebook page from clients running setups similar to yours, so please check them out, and have a good day.

Elvin

