Gigabit speed - Is it real?

Huggy-

Sorry I missed your message before! You're right about theoretical speeds.

But!

100 Mbps FULL DUPLEX = 200 Mbps aggregate speed of send/receive. That's 100 Mbps THEORETICAL for send and 100 Mbps for receive.

1000 Mbps FULL DUPLEX = 2000 Mbps aggregate speed of send/receive. That's 1000 Mbps THEORETICAL for send and 1000 Mbps for receive.
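A quick sanity check on the full-duplex math above, as a sketch: a full-duplex link can send and receive at line rate at the same time, so the aggregate (send + receive) is simply twice the rated speed. These are theoretical maxima, not what you'll measure.

```python
# Full duplex means simultaneous send and receive at line rate,
# so the aggregate figure is twice the rated speed (theoretical only).
def aggregate_mbps(line_rate_mbps: float) -> float:
    return 2 * line_rate_mbps

print(aggregate_mbps(100))   # 100 Mbps full duplex -> 200.0 Mbps aggregate
print(aggregate_mbps(1000))  # 1000 Mbps full duplex -> 2000.0 Mbps aggregate
```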

This thread has prompted me to purchase a new Linksys EG005W 5-port Gb switch so I can do my own testing at home with the lower end switches. I'll need to buy a WS-G5483 GBIC to connect it to my Cisco 3500.
 
Dan-

Actually, I did last night. I changed my mind about using Linksys and went with an SMC 8508T 8-port switch, which I ordered for $91.20 from Provantage.com. Newegg is out of stock on the switch, but with a $15.00 rebate it would be $83.00 there.

New Egg Link

The deciding factor was that this is one of the only "low-end" switches that supports jumbo frames (MTU 9000).
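Some rough back-of-the-envelope math on why jumbo frames matter, assuming standard Ethernet per-frame overhead (14-byte header + 4-byte FCS on the frame, plus the 8-byte preamble and 12-byte minimum inter-frame gap on the wire); IP/TCP headers add further per-frame overhead not counted here:

```python
# Each Ethernet frame carries a fixed overhead regardless of payload size,
# so a 9000-byte MTU wastes proportionally less of the wire than 1500,
# and cuts the number of frames (and per-frame interrupts) by ~6x.
HEADER_FCS = 14 + 4    # Ethernet header + frame check sequence
PREAMBLE_IFG = 8 + 12  # preamble + minimum inter-frame gap on the wire

def wire_efficiency(mtu: int) -> float:
    return mtu / (mtu + HEADER_FCS + PREAMBLE_IFG)

print(f"MTU 1500: {wire_efficiency(1500):.1%}")  # ~97.5%
print(f"MTU 9000: {wire_efficiency(9000):.1%}")  # ~99.6%
```

The raw wire efficiency gain looks small, but the bigger practical win is the roughly sixfold reduction in frames the NIC and host have to process per megabyte.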

I wanted to keep everything Cisco, but I didn't want to spend $60 per port by buying eight WS-G5483 copper GBICs for my switch. This way I'll only need one for my uplink.
 
Let us know how that switch works out. As you mentioned before, these low-end consumer switches usually don't perform as well as the professional line, but there is no way I can justify spending a few hundred bucks on a switch.
 
Glad to hear you got better results with everything configured correctly. Obviously that is key, as was pointed out earlier. That said, I'm shocked that you don't consider a 690% (not 6.9%) increase satisfactory!

Yes, I think 690% is a big improvement compared to 100 Mb, but it's still well short of what most people expect out of a 1-gigabit network. And considering that was with a single large test file, not the normal files that would be used in the real world, it is still a little disappointing.
Yes, that number compares the same exact servers and switches manually set to 100 Mb.

To be honest, there weren't any drastic settings changed; the biggest increase in speed came from changing from smaller files to one large file. The settings on the switch and on the cards did not produce that much of a difference. Maybe a 10 percent difference, which may or may not be worth all the effort for some people.


Switches don't add much overhead at all. As you indicated, they should give you similar results to directly connected NICs, since you're operating at layer 2. It's when you get into routing at layer 3 that any real latency is introduced, and that isn't anything you're experiencing, as I assume you're not set up on different VLANs.

For most of our network, no. We do have a switch using VLANs, but most of our traffic is geographically separated. The main office is pretty small, just tons of data generated.

How are you transferring files? Windows drag and drop? This will add significant time to the copy.

The tests were done using Windows copy, good ol' xcopy/robocopy, and FTP. They were actually pretty close in comparison, FTP being the fastest and Windows copy the slowest. I did use a couple of testing utilities provided by HP and Cisco, but they just confirmed the numbers I had already come up with.

Great setup, but a Netgear 1 Gb switch for $79-$99 cannot be expected to perform consistently anywhere near the same as a Cisco 6506.

Actually, it's a $200+ Netgear managed switch; I got it from Netgear for a really cheap price. They thought they actually had a chance at pushing Cisco and HP out. :p

I believe you've probably learned a lot from this experience, so if nothing else, you understand you will never see maximum specs on any device.

Actually, I have been in the networking field since DOS 3.1, with NetWare Lite and DESQview. I have managed, supported, and serviced networks from 5,000 users down to 2.
I have seen switch manufacturers come and go like bad movies.

Not true at all. Why wouldn't an IT dept have the resources to install it correctly?

Having a wide range of experience myself, I know that in smaller companies the IT dept is usually one person, maybe two, who may or may not be dedicated to the task and usually doesn't have the time or the experience to deal with configuration issues like the ones above. Most smaller companies just want plug and play; my own company, for instance. They could not understand why the simple task of setting up new servers and a new gigabit switch, then coming up with throughput measurements, could take up so much of my time.

Even after the golden rule of any IT dept, "You only get 50% of what something is rated at," I just expected more out of gigabit technology.

StevenE
 
Now that I am through with the office experience, I am going to do some testing myself at home.

Besides FTP and copying, does anyone know of a free utility to test transfer speeds?

StevenE
 
Besides FTP and copying, does anyone know of a free utility to test transfer speeds?

It's not free, but I just started using PassMark PerformanceTest and its network test. Just last night, in preparation for receiving my Gb switch, I took baselines of my current performance at home.

In the process, I found a bad cable causing poor throughput on one of my computers, which explains some choppiness I've had playing back DVD rips across the network to my HTPC. All machines now perform between 91 Mbps and 93 Mbps.

When I receive my new SMC switch, I'll need to tweak the network cards to enable jumbo frames, but I'll be able to take before-and-after baselines for each setting change.

Passmark Product Info

I'm sure there are others that are free.
 
Before I get into my findings with the low-end Gb switch I purchased, I just thought I'd thank Deranged for bringing up the topic. It gave me an excuse to study it more in depth. Although many of us mentioned bottlenecks in other components of the computer, I guess we weren't specific enough about what caused them.

I think this post will provide insight into WHY the performance was poor, especially if you try to burn a CD across the network.

I received my SMC 8508T Gb switch, which is currently the only low-end switch (under $100, or even $200) that has the jumbo frames feature (MTU of 9000 bytes). I hooked it up and attached two Dell 400SC servers with built-in Intel 1000 MT NICs.

As Intel's drivers are updated very frequently and do impact performance, I downloaded the latest NIC driver and proceeded with the testing. I began with a Passmark performance test using just the NIC's default settings. My results averaged ~315 Mbps. Nothing too impressive, and very similar to Deranged's results.

performance.jpg


For the next test, I went into the adapter settings and set the jumbo frame size to 9014 (the default is disabled, which means the MTU is the standard Ethernet 1500). My test results jumped quite a bit, to an average of 406 Mbps.

performance2.jpg


I tweaked a bit more and didn't get much above 500 Mb, and it wasn't sustained. Then I started looking into why, and lo and behold, it hit me. The PCI bus that your built-in NIC or add-on card connects to is only a 32-bit 33 MHz bus, which has a maximum transfer rate of... 133 MBps. Now, you say, so what? 1 Gb Ethernet translates to only 125 MBps, so there is plenty of speed! Well, not only is 133 MBps the maximum speed of the bus, which cannot be sustained, it's SHARED BETWEEN ALL PCI DEVICES.
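The bus arithmetic above, worked out as a quick sketch: classic PCI moves 4 bytes (32 bits) per clock at roughly 33.33 MHz, and that peak is shared by every device on the bus, while gigabit Ethernet needs a sustained 125 MB/s all to itself.

```python
# 32-bit/33 MHz PCI peak bandwidth vs. what gigabit Ethernet demands.
pci_bytes_per_clock = 32 // 8    # 32-bit bus -> 4 bytes per transfer
pci_clock_hz = 33.33e6           # ~33 MHz PCI clock
pci_peak_mbytes = pci_bytes_per_clock * pci_clock_hz / 1e6

gige_mbytes = 1000e6 / 8 / 1e6   # 1000 Mbps -> 125 MB/s

print(f"PCI 32-bit/33 MHz peak: {pci_peak_mbytes:.0f} MB/s (shared, burst)")
print(f"Gigabit Ethernet needs: {gige_mbytes:.0f} MB/s (sustained)")
```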

Therefore, if you are using any other PCI devices, your performance is going to be severely impacted. That means if you're trying to rip a DVD across the network on a 33 MHz PCI bus, forget about it. So just make sure you disconnect your HD, DVD and everything else before you test it. ;)

The 64-bit 66 MHz NIC cards are much faster, but still won't get you as close to 1 Gb as you'd like to see. Our performance results have been excellent with Cisco switches at work; however, we've only hooked up the Gb links between switches. When you hit a server, you need to be very sure that the bus your NIC sits on is up to it. Most likely it is if you have all high-end servers. Unfortunately, even Deranged's HP workstations have a 32-bit 33 MHz bus for the NIC, but they have PCI-X slots into which a 64-bit 66 MHz card could be added.
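As a rough comparison of the two bus variants mentioned above: the faster flavor doubles both the width and the clock, which is why a 64-bit 66 MHz slot has so much more headroom for gigabit traffic.

```python
# Peak PCI bandwidth scales with bus width (bytes per clock) times clock rate.
def pci_bandwidth_mbytes(bus_bits: int, clock_mhz: float) -> float:
    return (bus_bits / 8) * clock_mhz

print(f"32-bit/33 MHz: {pci_bandwidth_mbytes(32, 33.33):.0f} MB/s")  # ~133
print(f"64-bit/66 MHz: {pci_bandwidth_mbytes(64, 66.66):.0f} MB/s")  # ~533
```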

If you're in the market for a switch, buy the SMC, unless another one comes out that supports jumbo frames. I almost bought the Linksys, because I'm partial to them, but I'm glad I did the research before I pulled the trigger.

Here are some excellent links I found after I was able to target my Googling better. They explain it much better than I could.

Link 1 - Gb Card Roundup

Link 2 - Gb Card Roundup 2

List of switches supporting Jumbo frames and those that do not

For the IT folks who own Cisco gear and want to configure it properly for jumbo frames

Sorry to beat this topic to death, but I learned a lot from it myself, and I hope others post similar items!

Is GigaBIT for real? Yep, it sure is. Are PCs ready for it? Nope, not by a long shot.
 
Hey Stinger,

Thanks for the detailed testing; it confirms exactly what I was thinking. As for gigabit speeds, I am sure you can see those easily in a switched environment, when multiple servers are copying/moving data around.
 
Stinger,

I did a little testing last night on my home machines. I started in the 300 Mbps range. After a little more playing, I changed the TCP window size to 256k and the speed jumped up to 640 Mbps (using iperf.exe, a great little utility). There didn't seem to be much of a difference from a 96k TCP window size and up, except that Intel had recommended 256k when I was speaking to them.
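A sketch of why the TCP window size mattered so much: a sender can only have one window's worth of data in flight before it must wait for an ACK, so throughput is capped at roughly window / round-trip time. The RTT below is an assumed LAN value for illustration, not a measured one.

```python
# Throughput ceiling imposed by the TCP window (bandwidth-delay product):
# at most one window of unacknowledged data can be in flight per RTT.
def max_tcp_throughput_mbps(window_bytes: int, rtt_seconds: float) -> float:
    return window_bytes * 8 / rtt_seconds / 1e6

RTT = 0.001  # assumed ~1 ms LAN round trip for illustration
print(f"64 KB window:  {max_tcp_throughput_mbps(64 * 1024, RTT):.0f} Mbps cap")
print(f"256 KB window: {max_tcp_throughput_mbps(256 * 1024, RTT):.0f} Mbps cap")
```

On a LAN with sub-millisecond RTT the default window is less of a limit, which fits the observation that gains flattened out past a 96k window.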

I don't think the Netgear switch supports jumbo packets; I couldn't find any info.

I found, for some strange reason, that my disk access between the computers is very slow, about 14 Mbs, even though each machine has 4 SATA drives in a RAID 5 with an Adaptec 1420SA RAID controller.

Hmmmm, another Windows XP quirk?

I didn't find any answers on that yet.

StevenE
 
Deranged-

I played with a bunch of TCP settings, but I only upped my window to 131400 based on what I found here:

Link

Was it a consistent 600 Mb? It was with the NICs connected directly together, right? I'd love to settle around there with the SMC switch. I think your disk access is slow because of the shared bus. Verify that write caching is enabled on the drive (ignore the warning about power loss causing corruption; you want it for use in file servers, and they should have a UPS anyway, right?).

You're right on the Netgear; it doesn't support jumbo packets.

I'm off to test some TCP window settings! Thanks for the tip.
 
Stinger,

I played with a few TCP/IP settings myself, with little or no change, except for the TCP window size.

I mixed up my results. At home, I only got 440 Mbps. That was it; no matter what I tried, it would not go above that.

I do have one question regarding jumbo packets that I have not been able to find an answer for.

If you set up a server with jumbo packets, which increases speed with other servers (and the switch) set the same, how do 100 Mb clients accessing that server handle it?

Where does the breakdown of packets occur? Server level, switch level, client level?

I just want to understand the impact this is going to have.

Thanks StevenE
 