Gigabyte speed - Is it real?

deranged

Active Member
I upgraded at home to gigabyte network speeds and it seems I have been made a fool.

The whole goal was to burn DVDs at 16x across the wire. A gigabyte network should have more than enough bandwidth for such a task.

So after being disappointed at home, I did some real down-and-dirty testing at work. We have some pretty heavy servers and gigabyte switches, some not on the main network yet, waiting for approval of downtime. I could not get any benchmark program, FTP, or Windows file transfer to produce any kind of impressive speeds over gigabyte (copper and fiber - Cat 6 and Cat 5e).

So my question is: yes, it is faster, but is anyone actually seeing gigabyte speeds?
Most of my tests fell way short, in the under-300Mb range.

StevenE
 
Keep in mind that other factors such as CPU speed, HD speed, memory, and network card chipset, as well as potential issues such as bad cabling or a duplex/speed mismatch, can and will cause big slowdowns.
 
I believe that "Gigabit" refers to full-duplex. I think it is only 500 megabits in each direction.
 
Sorry to be nit picky, but it's gigabit (Gb), not Gigabyte.

Many people make this mistake. When discussing memory and hard drives, we measure in bytes; when discussing networks, we measure in bits.

Therefore, 1 Gb per second is equal to 125 MB per second when looking at a maximum theoretical transfer rate. That said, you would never see that; it's a theoretical maximum in a perfect world. There are too many other factors to consider, and more than likely your network is not the bottleneck.

I believe your 16X transfer rate is 21.6 MBps, so again, the bottleneck probably isn't the network. Your hard drive seek time, your OS, and, depending on how you're doing your tests, TSRs such as NAV have a large impact.
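If it helps to sanity-check those figures, here is a quick back-of-the-envelope sketch in Python; the 1,385 KB/s figure for a 1x DVD is the commonly quoted rate, so treat the exact numbers as approximate.

```python
# Back-of-the-envelope check: gigabit Ethernet vs. a 16x DVD burn.
# The 1x DVD rate below is the commonly quoted figure; treat it as approximate.

GIGABIT_BPS = 1_000_000_000          # gigabit Ethernet signals 10^9 bits/s
DVD_1X_BYTES_PER_SEC = 1_385_000     # roughly 1.385 MB/s at 1x

link_mbytes_per_sec = GIGABIT_BPS / 8 / 1_000_000            # theoretical ceiling
burn_mbytes_per_sec = 16 * DVD_1X_BYTES_PER_SEC / 1_000_000  # 16x burn rate
burn_mbps = burn_mbytes_per_sec * 8

print(f"Gigabit ceiling : {link_mbytes_per_sec:.0f} MB/s")    # ~125 MB/s
print(f"16x DVD burn    : {burn_mbytes_per_sec:.1f} MB/s "
      f"({burn_mbps:.0f} Mbps)")                              # ~22 MB/s, ~177 Mbps
```

In other words, even a few hundred Mbps of real throughput is comfortably more than a 16x burn needs; the bottleneck is usually somewhere else in the chain.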

Gb networks are commonplace, and we regularly EtherChannel dual or even quad links on Cisco gear to get 2-4Gb for uplinks or servers. We're now looking at Category 6A and 10Gb networks for our future buildouts. However, the Cat 6A standard hasn't even been finalized by the standards committee yet.

In short, yes it works.
 
I see my Gb network working just fine between the 3-4 systems I have on Gb. But be aware that the wiring is a factor: Gb requires all four pairs, so your network cabling does make a difference. It's actually only 400Mbps (x2 for full duplex), so you won't see true Gbit speeds, nor will you get that in a single direction...
 
Stinger said:
Sorry to be nit picky, but it's gigabit (Gb), not Gigabyte.
Okay, yes, I know it is gigabit and not gigabyte; just a poor explanation on my part.

Therefore, 1 Gb per second is equal to 125 MB per second when looking at a maximum theoretical transfer rate. That said, you would never see that; it's a theoretical maximum in a perfect world. There are too many other factors to consider, and more than likely your network is not the bottleneck.

If you want to measure the speed in MB, I was not able to get over 14-18 MB/s with any of the tests I was running.

I believe your 16X transfer rate is 21.6 MBps, so again, the bottleneck probably isn't the network. Your hard drive seek time, your OS, and, depending on how you're doing your tests, TSRs such as NAV have a large impact.

I don't agree. I did the tests on my home machines, which are no slackers. I can burn 16x fine locally on the machine, and locally on the storage server. As a matter of fact, when I started testing the machines individually, I was surprised at how high the benchmarks actually are.
And I know it's against the golden rule, but I do not run any anti-virus, anti-spyware, or any kind of firewall on my machines. Running that software just takes the snappiness out of a system, and I tend to use whatever resources are available on my machines.

Gb networks are commonplace, and we regularly EtherChannel dual or even quad links on Cisco gear to get 2-4Gb for uplinks or servers. We're now looking at Category 6A and 10Gb networks for our future buildouts. However, the Cat 6A standard hasn't even been finalized by the standards committee yet.

I agree, yes, it works. At the office I am responsible for the care of terabytes of information; I have servers that can move 100GB of data between local RAID volumes in seconds. But when testing the systems connected to either a high-end Cisco gigabit switch or a high-end HP gigabit routing switch, within 3 feet of the servers using Cat 6, I was extremely unimpressed with the network speed. With the tests I did at the office, I came up with numbers that are basically only 2 to 3 times faster than 100Mb. I even spent time on the phone with one of the benchmark companies and with HP, verifying I had everything set up as they would like.


I do have one question that seems to be unanswered at the moment. As far as connection speed goes, I know gigabit adapters connect at variable speeds as supported by the wire. But is that actual connection speed listed anywhere?

I connected two Intel gigabit adapters with a 50-foot Cat 3 patch cable. They connected and displayed 1000Mbps, even though the actual test speeds with data were a lot slower than previously.

StevenE

In case anyone was wondering, all this testing was not just for fun. My company is about to start a new project and I needed real-world data transfer numbers.
 
I don't agree. I did the tests on my home machines, which are no slackers. I can burn 16x fine locally on the machine
It's not a matter of agreeing or disagreeing; it's whatever the specs of the drive are. I thought I read somewhere that 16X equates to 21.6 MBps, but you'd have to check your drive specifications.

I have servers that can move 100GB of data between local RAID volumes in seconds.
Key word is local.

I was extremely unimpressed with the network speed. With the tests I did at the office, I came up with numbers that are basically only 2 to 3 times faster than 100Mb.
Have you actually run tests on 100Mb to prove that theory? You'd never see 100Mb either. Your tests should be a bit more methodical: compare the same test on both 100Mb and 1000Mb networks configured the same with the same devices, and record actual results.
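For a controlled comparison, something like the minimal Python sketch below (the port number and transfer size are arbitrary placeholders) takes the disks and the copy tool out of the picture and measures only the wire and the TCP stack. Run it once with the ports forced to 100Mb and once at 1000Mb and compare.

```python
# throughput_test.py - crude point-to-point TCP throughput check (a sketch).
# Usage: "python throughput_test.py server" on one box, then
#        "python throughput_test.py client <server-ip>" on the other.
import socket
import sys
import time

PORT = 5001                  # arbitrary test port (placeholder)
CHUNK = 64 * 1024            # 64 KB per send/recv call
TOTAL = 512 * 1024 * 1024    # move 512 MB per run

def report(verb, nbytes, secs):
    mbytes = nbytes / 1e6
    print(f"{verb} {mbytes:.0f} MB in {secs:.1f} s = "
          f"{mbytes / secs:.1f} MB/s ({mbytes * 8 / secs:.0f} Mbps)")

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("", PORT))
        s.listen(1)
        conn, _addr = s.accept()
        with conn:
            received = 0
            start = time.time()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
            report("received", received, time.time() - start)

def client(host):
    payload = b"\x00" * CHUNK
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect((host, PORT))
        sent = 0
        start = time.time()
        while sent < TOTAL:
            s.sendall(payload)
            sent += len(payload)
        report("sent", sent, time.time() - start)

if __name__ == "__main__":
    if sys.argv[1] == "server":
        server()
    else:
        client(sys.argv[2])
```

The receiver's number is the one to trust, since the sender only measures how fast data went into its local buffers.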

I do have one question that seems to be unanswered at the moment. As far as connection speed goes, I know gigabit adapters connect at variable speeds as supported by the wire. But is that actual connection speed listed anywhere?
No. It doesn't connect at variable speeds; it connects at 1000Mb, and based on the laws of transmission, the packets perform as best they can. Nothing sits on the wire measuring how fast the packets fly and updating the link speed, as that itself would introduce latency.

I connected two Intel gigabit adapters with a 50-foot Cat 3 patch cable. They connected and displayed 1000Mbps, even though the actual test speeds with data were a lot slower than previously.
Kidding, right? Or was this a test to show a slow transmission with out-of-spec cable? You shouldn't even have a Category 3 cable anywhere near this equipment. In fact, throw it out; it's useless to you at work.

I'm of course assuming that everything looks good on the switch and the NIC. If these are high-end switches, I would get into the switch and verify that the port has connected at 1000/full duplex. Occasionally NICs don't negotiate speed/duplex properly when the port defaults to auto-negotiate. You can force the speed on either side to correct that.
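As a side note, if either end ever happens to be a Linux box, the speed and duplex the NIC actually negotiated can be read straight out of sysfs. A minimal sketch (the interface name is a placeholder):

```python
# link_check.py - read negotiated link speed/duplex from Linux sysfs (a sketch).
# Only works on Linux, and only while the link is up; adjust the interface name.
from pathlib import Path

def link_status(iface: str) -> str:
    base = Path("/sys/class/net") / iface
    speed = (base / "speed").read_text().strip()    # in Mb/s, e.g. "1000"
    duplex = (base / "duplex").read_text().strip()  # "full" or "half"
    return f"{iface}: {speed} Mb/s, {duplex} duplex"

if __name__ == "__main__":
    print(link_status("eth0"))  # "eth0" is a placeholder interface name
```

If the NIC reports 1000/full but the switch port shows something else, that mismatch is exactly the kind of negotiation problem worth forcing by hand.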

I think the piece you're missing is that when you put many different components together for testing, you'll only be as fast as your slowest component. A true analysis would follow the packet and list the specifications (Mbps or MBps) of each component.

I'd be happy to help you out more if you gave specifics starting with model numbers. How much speed do you need? What prompted the purchase of the switches your company bought? Have you looked at channeling the ports for additional speed? What are you using to measure the results in your testing?
 
I don't agree. I did the tests on my home machines, which are no slackers. I can burn 16x fine locally on the machine
It's not a matter of agreeing or disagreeing; it's whatever the specs of the drive are. I thought I read somewhere that 16X equates to 21.6 MBps, but you'd have to check your drive specifications.

I was stating that I disagreed that my machines are slow (HD or memory, etc.). As I stated, locally burning 16x DVDs is no problem. As a matter of fact, in the past I have burned a 16x DVD and an 8x DVD at the same time with no delays. Just coming across the network is an issue.

I have servers that can move 100GB of data between local RAID volumes in seconds.
Key word is local.
I just wanted to explain that these are the servers I am testing with at the office.

Have you actually run tests on 100Mb to prove that theory?
Actually, yes, I did on Friday, and the best speed improvement I got for gigabit was 3.4 times what I did on 100Mb (using the same servers and the same switches).
Though I do admit this was done by setting the switch and cards manually to 100Mb.

Kidding, right? Or was this a test to show a slow transmission with out-of-spec cable?

No, not kidding. I did it as a punishment test; I wanted the network cards or the switch to produce any kind of error. To my surprise they did not produce any errors; the data rate was just slowed. And yes, Cat 3 is garbage. I was shocked when I found one buried in the closet, so I thought I would use it in a test.

Okay, maybe "variable speed" was misleading. I was talking about the NIC's ability to adapt to the best possible transmission rate based on the connection medium.

At first I was using our main switch, a Cisco 6500 series (6506, I believe). Thinking the standard network traffic might be affecting the results, I started using an HP 3400 switch with only the two new servers on it; then I decided the heck with the switch and connected the two servers directly together.

The two servers are brand-new HP ProLiant DL580s with three external RAID units each.
The servers have both copper and fiber 1Gb ports.

The software HP had me download for testing was Chariot or something like that. I also did Windows copying and FTP transfers (FTP always produced the best results).

HP even admitted that I was not going to see what I was expecting; even with optimal conditions, they said I would only achieve 5 to 6 times a 100Mb connection.

My statement here is that I think the gigabit technology is very misleading. I would expect at least 8 to 9 times the speed increase (10 being the perfect number).

I haven't done any sophisticated testing on my home network yet, as I am under a deadline at work to get the new servers into production, along with real-world testing data before the servers go live.
I was just wondering if anyone else was as unimpressed with gigabit technology as I was.

StevenE
 
Steven:

Thanks for this post. I thought it was very informative and even though I have nothing to contribute to it, just wanted to say that I actually learned a lot from it and the following replies.

I would like to see more of this type of posting at CocoonTech! Great job. ;)

Regards,

BSR
 
At first I was using our main switch, a Cisco 6500 series (6506, I believe). Thinking the standard network traffic might be affecting the results, I started using an HP 3400 switch with only the two new servers on it; then I decided the heck with the switch and connected the two servers directly together.

The Cisco 6506 configuration is a very big piece of the puzzle. If you have, for instance, PortFast enabled on a switch port connected to another switch, it will cause issues; the port needs to be specifically configured for a switch. I always force speed and duplex on such connections, even if they appear to negotiate correctly automatically.

In your environment, you really should consider Gb EtherChannel between the switches, as well as investigating it for your servers. I haven't looked at Gb EtherChannel cards for servers at all, just 100Mb EtherChannel. I'm not sure the servers could even keep up with 2Gb.

In a correctly set up environment, with servers whose buses can keep up with the network speed, you will see a large increase.

One idea you might want to look into is loading MRTG. I love MRTG for SNMP monitoring and tracking; it will give you performance reports on any device that has SNMP capability. For you home automation folks out there reading this as well, it has fun capabilities for us too. I use it to monitor my TiVo, switches (Linksys and Cisco), and computers in the house.

Example of Tivo Reports

It can be loaded on Windows or Linux, but was designed for Linux. Someone put together an excellent package for Windows that is just as functional as Linux installs at: Windows MRTG Bundle

If anyone is interested, I'd be happy to help them on setting it up as well. I like to know how my machines are performing.
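For anyone curious what MRTG is actually doing under the hood: it polls an SNMP octet counter on a schedule and turns the deltas into a rate. Here is a rough sketch of that idea, assuming the net-snmp command-line tools are installed and that the device answers SNMP v2c with the "public" community string (both assumptions; the host address and interface index are placeholders):

```python
# mrtg_style_poll.py - poll an SNMP interface counter twice and compute a rate.
# Sketch only: assumes the net-snmp "snmpget" tool is on the PATH and that the
# target device answers SNMP v2c with the "public" community string.
import subprocess
import time

HOST = "192.168.1.2"            # placeholder switch/server address
OID = "IF-MIB::ifInOctets.1"    # inbound octets on interface index 1
                                # (for gigabit links the 64-bit ifHCInOctets
                                # counter is preferable, since 32-bit counters
                                # wrap quickly at those speeds)
INTERVAL = 60                   # seconds between the two samples

def get_octets() -> int:
    out = subprocess.run(
        ["snmpget", "-v2c", "-c", "public", "-Ovq", HOST, OID],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout.strip())

if __name__ == "__main__":
    first = get_octets()
    time.sleep(INTERVAL)
    second = get_octets()
    bits_per_sec = (second - first) * 8 / INTERVAL
    print(f"average inbound rate: {bits_per_sec / 1e6:.1f} Mbps")
```

Pointed at the switch port a server hangs off of during a transfer test, it gives you a second opinion on the throughput numbers, independent of whatever the benchmark tool reports.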
 
I couldn't agree more with what Stinger said. As I mentioned earlier, there are so many roadblocks that could cause these slowdowns; unless you have a state-of-the-art machine at home, there are going to be plenty of components that can't handle gigabit speeds (e.g., IDE/PCI). Gigabit speeds are possible, but probably only in a perfect environment.

Check out this page for some more info on how these chipsets and components can make a difference; it's a pretty interesting article:

http://www.accs.com/p_and_p/GigaBit/challenges.html

Another good article:
http://www.networkworld.com/research/2000/0320revgig.html
 
Ahem, I think you guys are missing a key point here... and that is, the Gb Ethernet spec is a measurement of BI-DIRECTIONAL, FULL-DUPLEX traffic speeds, and assumes full duplex using four-pair cabling (two transmit, two receive).

On a large SINGLE-DIRECTION transfer such as a file copy or DVD BURN, you will see only ONE HALF of the "rated" transfer speeds (400Mbps max).

In addition, if the systems are connected through a switch rather than point-to-point, then you have switching and contention from other systems (broadcasts, inter-switch traffic, etc.), adding further latency. Even if you add additional connections, such as Cisco's EtherChannel, you still introduce overhead and can't use 100% of the bandwidth available. But wait! This is Ethernet - 100% of the bandwidth is NEVER available! It is a theoretical value!

Large SANs and similar storage systems use multiple wide-channel, short-range (i.e. very local) transfer buses that allow huge amounts of data to travel very quickly in one or sometimes both directions. You won't see these drive arrays hooked up with a single Gb connection to their controller, though!

So, 3.4 x 100Mbps is exactly what I'd expect for a single-direction transfer on a dedicated Gb Ethernet link, and in fact, I think that's pretty good. I'd use that as your rule of thumb.
 
Well,

I am glad some have found this informative, it certainly has opened my eyes in regards to what to expect out of the box.

Anyway, I have spoken with Cisco and HP concerning settings and have optimized everything according to their recommendations.

Still not impressive speeds, though. Faster, and if I do a transfer using one very large file I do get higher speeds. The switches actually do not seem to impose any overhead; in the tests I have done, the results were the same with the servers connected directly together or going through the Cisco or HP switch. Just for fun I did use a Netgear 1Gb switch for one of the tests, and it did add some overhead.

The conclusion is about 600Mbps+ in one direction with a single-large-file test.
Otherwise, a little over 450Mbps with the type of files I am transferring.

HP and Cisco have been a lot of help. I did even try Intel server adapters (copper); they performed about the same as the built-in copper adapters in the HP servers.

I still think gigabit network technology is misleading; it should not be so difficult to reach the expected speeds (if ever).
Granted, I did achieve a 6.9 percent increase over 100Mb after Cisco and HP had me change some settings, and transferring a single large file.

As far as home goes, I have two HP xw8200 3.4GHz Xeon workstations, each with four SATA drives on a RAID 5 controller, and a Netgear 1-gigabit switch (I took it to work to test). :p

I guess it comes down to the fact that a normal installation, either by a home user or an IT department in a small company, is not going to have the resources or ability to correctly set up a gigabit network to get the best performance, and out of the box the performance is not what you would expect. In my case, my company needed to know exactly what could be expected from these servers.

As a side note, I re-did the tests with two 100ft Cat 5e patch cables to the switches (the extra coiled nicely on the floor) and two 100ft Cat 6 cables, and did not see a difference. Of course, that is not exactly the same as running through a ceiling or near other types of wiring with patch panels.

StevenE
 
Granted, I did achieve a 6.9 percent increase over 100Mb after Cisco and HP had me change some settings, and transferring a single large file.

Glad to hear you got better results with everything configured correctly; obviously that is key, as was pointed out earlier. That said, I'm shocked that you don't consider 690% (not 6.9%) a satisfactory increase! I assume the 690% was based on the exact same test with the ports forced to 100Mb full duplex?

Switches don't add much overhead at all, as you indicated; they should give you similar results to directly connected NICs, since you're operating at layer 2. It's when you get into routing at layer 3 that any real latency is introduced, and that isn't anything you are experiencing, as I assume you're not set up on different VLANs.

How are you transferring files? Windows drag and drop? This will add significant time to the copy.
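If you want to put a number on that overhead, one simple approach is to time the copy itself and compare it against a raw network test. A sketch in Python (the source file and UNC destination path are placeholders):

```python
# copy_timer.py - time a file copy to a network share and report MB/s (a sketch).
import os
import shutil
import time

SRC = r"C:\temp\testfile.bin"            # placeholder local source file
DST = r"\\server\share\testfile.bin"     # placeholder network destination

size_mb = os.path.getsize(SRC) / 1e6
start = time.time()
shutil.copyfile(SRC, DST)                # the copy being measured
elapsed = time.time() - start
print(f"copied {size_mb:.0f} MB in {elapsed:.1f} s = {size_mb / elapsed:.1f} MB/s")
```

The gap between this figure and a raw TCP test is roughly what the copy tool and the file-sharing protocol are costing you.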

As far as home goes, I have two HP xw8200 3.4GHz Xeon workstations, each with four SATA drives on a RAID 5 controller, and a Netgear 1-gigabit switch (I took it to work to test).

Great setup, but a Netgear 1Gb switch for $79-$99 cannot be expected to perform consistently anywhere near the same as a Cisco 6506.

I believe you've probably learned a lot from this experience, so if nothing else, you understand you will never see maximum specs on any device. I'm not sure what your background is on switches, but high-end switches aren't expected to perform optimally out of the box; there are too many custom settings that require tweaking for your environment. Of course, that's why we are gainfully employed in the IT field.

I guess it comes down to the fact that a normal installation, either by a home user or an IT department in a small company, is not going to have the resources or ability to correctly set up a gigabit network to get the best performance.

Not true at all. Why wouldn't an IT department have the resources to install it correctly?

Good luck with your further testing and implementation!
 