BSD / ZFS Throughput Issues Resolved

At work, we have a FreeBSD box connected to three sleds of eight drives each through an iSCSI HBA. ZFS is just such a good file system, seriously. But that is not what this is about.

It all started with a complaint from our backup tech. He said that the off-site copies of backups that we take for our clients were all failing. Among the possible causes, the backup vendor suggested that the CIFS share on this target might be overloaded. I mean, CIFS sucks, so yeah, maybe.

Investigating, I found that my write throughput to the pool over CIFS was about 8 Mbps. Gross. Jumping on the ZFS box, I ran:

/usr/bin/time -h dd if=/dev/zero of=/mikrobdr/mikrobdr/testfile bs=1024 count=1000000000

to get the write speed of the zpool. 803 Mbps. Acceptable. Reversing it with

/usr/bin/time -h dd if=/mikrobdr/mikrobdr/testfile of=/dev/null bs=1024 count=1000000000

gave me a read speed of 533 Mbps. The pool is RAIDZ2, so meh, it’s fine.
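Side note: if you want to watch the pool itself while a test like this runs, zpool iostat shows live per-vdev throughput. Going by the mount path above, I’m assuming the pool is named mikrobdr:

zpool iostat -v mikrobdr 1   # pool name assumed from the mount path; refreshes every second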

Then I tried rsyncing a file to it to bypass CIFS. 8 Mbps again! So it’s not the protocol; it’s the network. I tested the Ethernet cable and it was fine, confirmed Cat 6. I swapped it out anyway and saw the same, if not worse, performance.
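For the curious, the rsync test was something along these lines; --progress prints the transfer rate as the copy runs (user and host here are just stand-ins):

rsync -av --progress testfile backupuser@zfsbox:/mikrobdr/mikrobdr/   # user/host are placeholders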

This BSD machine is plugged into an old router, and it’s on a different subnet from the production network. My next thought was an autonegotiation issue between the two.
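On FreeBSD you can check what the link actually negotiated straight from ifconfig; a healthy gigabit link should show something like autoselect (1000baseT &lt;full-duplex&gt;) on the media line. (em0 here is just a stand-in for whatever the actual NIC is.)

ifconfig em0 | grep media   # em0 is a placeholder; substitute your actual interface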

Lo and behold! Adding a cheap-o gigabit switch between the two resolved the negotiation, and now my speeds are ~80 Mbps. Not great!!! But it’s 10x better, and that should be enough to get us through the current crisis. I think the router is the culprit here. It’s been in play for who knows how long and desperately needs to be put out to pasture.

That’s another blog post.
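In the meantime, if negotiation flakes out again before the router gets replaced, one possible stopgap would be pinning the speed and duplex manually on the BSD side, something like the line below. The interface name and media values are just examples, and both ends of the link would need matching settings or you trade one problem for a duplex mismatch.

ifconfig em0 media 100baseTX mediaopt full-duplex   # example values; must match the other end of the link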
