
Six 1GbE Link Agg Connection vs. 10GbE Connection to Switch

COW Forums : SAN - Storage Area Networks

Eric Hansen
Six 1GbE Link Agg Connection vs. 10GbE Connection to Switch
on Jan 30, 2013 at 11:36:53 pm

If you have an ethernet SAN with a switch, the 2 most popular methods of connection to the server are:

1. Link aggregation using a multi-port ethernet card on the server, up to 6 ethernet ports per PCIe card, to a switch that supports link aggregation.
2. 10GbE card in the server connected to a switch with 10GbE and 1GbE ports.

The way I understand it, a link-aggregated connection spreads multiple sockets across those links. Does 10GbE work the same way? Can a 10GbE connection carry ten 1Gb sockets?
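(To illustrate what I mean, here's a toy sketch of how I understand the flow hashing to work; the function and fields are made up for illustration, not any real driver's hash:)

```python
# Toy sketch of why link aggregation helps many sockets but not one:
# each flow is hashed onto exactly one member link. Real LACP/bonding
# drivers hash actual header fields; this just shows the idea.

def pick_link(src_ip, dst_ip, src_port, dst_port, num_links):
    """Every packet of a given socket hashes to the same physical
    link (so packets never arrive out of order), which is why one
    transfer can never run faster than a single member link."""
    return hash((src_ip, dst_ip, src_port, dst_port)) % num_links

# Six aggregated 1GbE links: different sockets spread across links,
# but each individual socket stays pinned to one 1Gb link.
for port in (50000, 50001, 50002):
    print(port, "->", pick_link("10.0.0.5", "10.0.0.10", port, 445, 6))
```

If that's right, a single 10GbE port is just one big pipe, so even one socket could in principle use the full 10Gb, with no hashing involved.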

Haha, am I even using the right terminology? Steve?

Thanks

e

Eric Hansen
Production Workflow Designer / Consultant / Colorist / DIT
http://www.erichansen.tv



Bob Zelin
Re: Six 1GbE Link Agg Connection vs. 10GbE Connection to Switch
on Jan 31, 2013 at 2:59:02 am

Yes, but I am told that the Mac does not have the bandwidth to handle more than two 10gig ports link aggregated together. So if you use a 4 or 6 port card, there is no benefit other than redundancy.

So, the solution is DIRECT CONNECT, where (just like a 4 or 6 port ethernet card without a switch), you put a 4 or 6 port 10gig card in a computer and run each of these ports to individual clients with single port 10gig cards. And if you look at solutions from Small Tree and ProMax and Studio Network Solutions that are using Linux boxes, they put in multiple 4 or 6 port cards to have more connections to different client computers.

Of course, all of this becomes moot at a certain point, because even the fastest drive arrays max out at 1500-1600 MB/sec. With multiple 10gig clients running at 400MB/sec each, you hit the wall at the speed of the drive array, so the solution is to buy multiple fast drive arrays. And yes, you can stripe two big arrays together to get 2200MB/sec, but it starts to get silly (and expensive) at a certain point. You just buy multiple drive arrays and split the load. And do you really have 12 people all working at 400MB/sec?
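(To put numbers on that, using the figures above, a quick Python back-of-envelope:)

```python
# Back-of-envelope on the numbers above: the array is the wall.
array_mb_s = 1500        # fastest single drive array (1500-1600 MB/sec)
client_mb_s = 400        # one busy 10gig editing client

print(array_mb_s // client_mb_s)   # 3 full-speed clients; a 4th oversubscribes it
print(12 * client_mb_s)            # 4800 MB/sec for 12 clients (roughly three arrays)
```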

This is why 4K 3D DPX work is done with LOCAL STORAGE and not on a network.

And with all of that said, while I have seen the Interface Masters 10gig switches (and Arista, etc.) that are pure 10GbE switches, I have never had the luxury of actually running a real world test with a switch like this and multiple 10gig clients, all pulling that kind of bandwidth off the switch.


Bob Zelin




Matt Geier
Re: Six 1GbE Link Agg Connection vs. 10GbE Connection to Switch
on Jan 31, 2013 at 5:30:36 pm

Hi Eric and Bob,

Here are some things we tend to think about when it comes to this subject of 1GbE vs 10GbE.

It's not true that a Mac Pro is limited to 2 x 10GbE link aggregated together. Where did you hear / see this?

A 12-core Mac Pro with a decent processor config and a minimum of 24GB of memory (3 x DIMM, for a video / real-time server) can easily handle 4 x 10GbE link aggregated to a 10GbE switch. (For a more traditional "file server" you might find better performance with a 2 x DIMM configuration; the Mac Pro manual's documentation on the DIMM slots touches on how the Mac can utilize the memory channels differently.)

Six ports (1GbE or 10GbE) will work fine:

If 1GbE, expect your aggregate bandwidth to be 100MB/sec x 6 = 600MB/sec.

If 10GbE (roughly 600MB/sec inbound per link), expect your aggregate bandwidth to be about the same as 4 ports (600MB/sec x 4 = 2400MB/sec).
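(Here are those rule-of-thumb numbers as a tiny Python calculator; the four-port cap is just a stand-in for the host limit described above:)

```python
# Rule-of-thumb per-link numbers from above, as a tiny calculator.
MB_PER_LINK = {"1GbE": 100, "10GbE": 600}   # real-world MB/sec, not line rate

def aggregate(link_type, ports, cap_ports=None):
    # cap_ports models the host limit described above:
    # six 10GbE ports scaling like four in a Mac Pro.
    effective = min(ports, cap_ports) if cap_ports else ports
    return MB_PER_LINK[link_type] * effective

print(aggregate("1GbE", 6))                 # 600 MB/sec
print(aggregate("10GbE", 6, cap_ports=4))   # 2400 MB/sec
```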
--

Nobody will push that for several reasons.
Everyone knows that it's the storage performance that hits its knee first. Today's storage units are unable to keep up with what a 6 x 10GbE adapter could do, assuming 6 x 10GbE clients connected.

Small Tree, and I'm sure others, have many deployments where 4 and 6 port 10GbE adapters are link aggregated out of the "server" into a 1GbE/10GbE combo switch, or even an all-10GbE switch.

I can speak of many sites running anywhere from 5 to 10+ clients, all 10GbE to the Mac; some have switches, and some connect directly to servers.

Matt Geier
(Video Networking Solutions Expert)
(Creative Design Workflow Consultant)
(Social Media Networks Consultant)
(Technical Video Industry Sales Consultant)




Bob Zelin
Re: Six 1GbE Link Agg Connection vs. 10GbE Connection to Switch
on Feb 1, 2013 at 11:00:03 pm

Matt writes -

It's not true that a Mac Pro is limited to 2 x 10GbE link aggregated together. Where did you hear / see this?

REPLY -
Myricom

Bob Zelin




Eric Hansen
Re: Six 1GbE Link Agg Connection vs. 10GbE Connection to Switch
on Feb 2, 2013 at 5:17:56 pm

[Bob Zelin] "Of course, all of this becomes moot at a certain point, because even the fastest drive arrays max out at 1500-1600 MB/sec. With multiple 10gig clients running at 400MB/sec each, you hit the wall at the speed of the drive array, so the solution is to buy multiple fast drive arrays."

So I guess my next question is, how do you handle adding more RAIDs in a 10Gb environment using something like the Titanium? I know you can add an expansion RAID or 2 to the Titanium, but at some point, you will max it out and need a new host system. So if you're adding a new host system, and you need it to connect to the same clients as the first host system, are you now at the point where you need a 10Gb switch?

e

Eric Hansen
Production Workflow Designer / Consultant / Colorist / DIT
http://www.erichansen.tv



Bob Zelin
Re: Six 1GbE Link Agg Connection vs. 10GbE Connection to Switch
on Feb 5, 2013 at 1:06:31 am

What it appears that "everyone" is doing is putting multiple 10gig cards into the server, assigning static IPs to every port with separate subnets, and then giving the clients corresponding single port 10gig cards. With a Small Tree Titanium Linux server, or a ProMax Platform server, or a Maxx Final Share system with a Cubix or Magma expansion chassis, you put multiple multiport 10gig cards in the server and single port cards in the client computers (and if you have iMacs or MacBook Pros for clients, use an ATTO or Sonnet Thunderbolt PCIe expansion chassis like the Echo Express), and now you can connect 10gig to multiple clients without issue.
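(For illustration, a hypothetical version of that addressing plan; the subnets, port count, and naming are made up, not from any vendor's docs:)

```python
# Hypothetical addressing plan for 10gig direct connect: one /24 per
# server port, each port cabled straight to exactly one client.
NUM_PORTS = 4   # assumed four-port 10gig card

for n in range(1, NUM_PORTS + 1):
    server_ip = f"10.10.{n}.1/24"   # the server's nth 10gig port
    client_ip = f"10.10.{n}.2/24"   # the one client on that cable
    print(f"port {n}: server {server_ip} <-> client {client_ip}")
```

Because each server/client pair sits on its own subnet, traffic can never wander onto the wrong port, and no switch or LACP configuration is needed.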

If you in fact need 24 clients all 10gig, you buy a full 10gig switch from Small Tree, Interface Masters, Arista Networks, etc., but these switches cost $15,000, and then you add the cost of the single port 10gig cards - and then you have to consider doing LACP in order to get the bandwidth you need (as just described by Matt Geier from Small Tree). So it's easier and cheaper to just do the 10gig direct connect with a single multiport card or several multiport 10gig cards in the server.

And with all this said, you ain't gonna have 8 - 24 clients all doing 400MB/sec on a single 16 bay drive chassis that can only put out 1500 MB/sec total bandwidth. REMEMBER that companies like Avid, with the ISIS 5000, spec out that you can only have ONE Myricom 10gig card per ISIS 5000 chassis. Now, "we" (all companies mentioned) can do more than that with the super duper chassis that are available to us (thanks to ATTO and Areca), but this ain't no magic trick. There is always a bottleneck.

Bob Zelin





marcus lyall
Re: Six 1GbE Link Agg Connection vs. 10GbE Connection to Switch
on Feb 27, 2013 at 10:55:34 am

10Gb switches are starting to turn up on eBay.

http://www.ebay.com/itm/Fujitsu-XG2000C-20-Port-10Gb-Ethernet-Switch-PD-XG2...

Myricom cards are also pretty cheap these days.

For the intrepid, the 2nd hand enterprise market is your friend... at least until you can afford the new stuff.

BTW... just watched 13 streams of ProRes HD playing over 10Gb in Premiere, all with 160 px Gaussian blurs on them, in real time.
Bowled over...
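(Quick sanity check on that, assuming ProRes 422 HQ 1080p at Apple's ballpark of roughly 220 Mb/s per stream:)

```python
# Rough check that 13 ProRes HD streams fit comfortably in 10Gb.
# Assumes ProRes 422 HQ 1080p at ~220 Mb/s per stream (Apple's ballpark
# figure); plain ProRes 422 runs lower, so this is the heavy case.
stream_mb_s = 220 / 8              # ~27.5 MB/sec per stream
total = 13 * stream_mb_s
print(round(total))                # ~358 MB/sec for all 13 streams

usable = 10_000 / 8 * 0.9          # assume ~90% of 10GbE line rate is usable
print(total < usable)              # True: plenty of headroom
```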



