ZFS pool with SSD ZIL (log) device shared with NFS – Performance problems!

Some time ago I bought a STEC ZeusIOPS SSD with 18 GB capacity. The disk comes out of a Sun ZFS Storage 7420 system. But it's a 3.5″ drive, and without a server that supports 3.5″ SAS disks I couldn't test the SSD. Today I was finally able to test the drive in a Fujitsu Primergy RX300 S5 server. I installed five 500 GB SATA drives plus my STEC ZeusIOPS SSD. The first disk contains an OpenIndiana installation, the rpool. The remaining four SATA drives are grouped into a ZFS RAIDZ2 pool. I exported a ZFS dataset over NFS and 1 GbE to a VMware ESX system and ran several benchmarks from an Ubuntu Linux virtual machine.
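
For reference, the pool and the NFS export were created along these lines (a sketch from memory; the pool name, device names and dataset name are just examples, not the exact ones I used):

# zpool create tank raidz2 c2t1d0 c2t2d0 c2t3d0 c2t4d0   # four SATA disks, double parity
# zfs create tank/nfs                                    # dataset for the VMware datastore
# zfs set sharenfs=on tank/nfs                           # export it over NFS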

The results without the SSD are 75-80 MB/s write (850 ms latency), between 40 and 65 MB/s rewrite, and 120 MB/s read. I did several runs with bonnie++ and iozone and always got similar values. While the tools were running I watched the I/O with "zpool iostat". The write and rewrite numbers matched the figures above. Reading lots of data from disk was not necessary thanks to a large enough ARC memory cache, which is why the iostat read values stayed below 10 MB/s.
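
The invocations were roughly like this (a sketch; the mount point and sizes are examples, and the file size should exceed the VM's RAM so the client page cache can't hide the NFS path):

# bonnie++ -d /mnt/nfstest -s 8g -u root   # sequential write, rewrite and read pass
# iozone -a -s 4g -r 128k -i 0 -i 1        # test 0 = write/rewrite, test 1 = read/reread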

Then I added the STEC SSD as a log device to the ZFS pool and reran all the tests. I couldn't believe the values! My benchmarks finished with only 45-50 MB/s write and 35-45 MB/s rewrite. Read performance didn't change, of course. The write latency exceeded 10000 ms! Something went wrong, but I didn't know what. I did the runs again and watched the zpool iostat output in parallel. The iostat output always showed values above 100 MB/s, sometimes even above 170 MB/s, and always more than 100 MB/s, which is already the maximum rate of a single 1 GbE connection! But the benchmark output was very different; it didn't even reach the results of the runs without the SSD. I was confused. I then effectively took the log device out of the write path by setting logbias=throughput, and both the benchmark and the iostat results went back to 75-80 MB/s write. When I re-enabled the log device with logbias=latency, I again got benchmark results of at most 50 MB/s write and huge latency values, but an iostat output constantly above 100 MB/s!
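
For the record, adding the slog and toggling logbias are the usual one-liners (device and dataset names are again just examples):

# zpool add tank log c3t0d0              # attach the STEC SSD as ZIL (slog) device
# zfs set logbias=throughput tank/nfs    # sync writes bypass the slog, go to the pool disks
# zfs set logbias=latency tank/nfs       # the default: sync writes hit the slog first
# zpool iostat -v tank 5                 # per-vdev view while the benchmark is running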

Something is wrong, but I don't know what. :( Do you have an idea?

New job, dead lab and a temporary solution. ;)

Hi!

Let's start blogging again. :) A lot of time has passed since I last blogged about interesting things. The main reason was that I decided to change my employer. In the last weeks we moved to a new home, too. Now my little family is living in a sweet little house with a big garden and a big garage, but no big cellar anymore. ;) But I will talk about this circumstance a few lines later.

qSkills

Today was my first day at qSkills, my new employer. :) I'm working as an IT architect and software engineer. My first task is to design and implement an entirely new network, because the current one is very outdated. Maybe the new network will be based on Cisco Nexus. ;) It's hot stuff, and my new boss loves nice and cool toys. Me too! ;) By the way, the company earns its money with professional trainings and has a strong relationship with Fujitsu and NetApp. The dedicated training datacenter (nine 47U racks!) is packed with nice toys, predominantly Fujitsu servers and of course many NetApp storage systems. A very nice playground for lots of tests in addition to my private lab. :) Which is a good point to talk about my home lab.

My well-known home data center is DEAD! I moved everything into the new garage until I can start on my new home lab idea. ;) I'm not sure whether I can afford the new idea this year, because the move to the new home was very expensive and I have a wife and a little daughter who need money, too. But I can wait until next year. Meanwhile I'm running a temporary solution in the new cellar, which is not big enough for a 2 m tall rack. But it's enough for playing and blogging about my experiences, as you can see in the last picture. ;) That's why I hope you'll visit my blog again... :D

Some news…

It's the end of February and I haven't blogged in two months. :( But in the last months I had lots of work and prepared some very important changes. I quit my current job, and on April 1st, 2012 I will start working at a new company. I will blog about my new employer soon. It's an amazing company and I can't wait to start my new job. :) Last weekend I signed my new tenancy agreement, and my new home is amazing, too. But there's no more room for my home lab. That's why I have to stop playing around with my toys for several months. Don't worry, I have a great idea for my new "Home Datacenter", but I need time and of course money. ;) Stay tuned, and thank you for visiting my blog. :)

Merry Xmas 2011

I wish all my readers and followers on Twitter a

Merry Xmas!

Thank you for reading and commenting on all my blog posts and tweets. :) Now it's time to shut down my home lab and give my full attention to my family. :) My daughter Lina is one year old, and 30 minutes ago I gave her a very big teddy bear. :) That was really fun, and she loves her new toy.

In 2012 I will finish my M$ Hyper-V project, and I plan to publish a real-world Hyper-V cluster configuration. I will start with the current Hyper-V version and plan to do the same with Hyper-V 3.0! ;) A Windows SMB 2.2 file server over InfiniBand is on my roadmap, too. And with a little bit of luck I can get an HP EVA 4400 for my home lab. ;)

Stay tuned, and I wish everybody happy holidays! :)

Kind regards.
Tschokko

HP ProCurve Switch Mesh – My new lab network

I decided to try the HP ProCurve switch meshing technology in conjunction with my HP BladeSystems and HP Virtual Connect. After a week of planning, searching the Internet, and reading several documents, I started to rebuild the network in my home lab. The results are amazing. Everything in my lab runs very fast and with low latency. Even the software-based core router (Debian + Quagga) is no limitation. I tested the network performance with iperf and was able to send data through my mesh from one routed network to another at 900 Mbit/s! I ran the iperf test between two CentOS 6 virtual machines, each with one VMXNET3 NIC, and the VMs were placed on different hosts, too.
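
The iperf setup itself was nothing special (classic iperf2 as shipped with CentOS 6; the address is an example from one of my routed networks):

vm1$ iperf -s                   # server in network A
vm2$ iperf -c 10.1.1.10 -t 60   # client in network B, 60 second run across the mesh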

Configuring the switch mesh is very simple: disable routing and stacking, then add all mesh ports with the mesh command. The VLANs are added to all mesh ports automatically; just make sure that every mesh switch knows all configured VLANs. A sketch of the configuration follows below.
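
On my switches the whole thing boils down to a few lines like these (a sketch; the port list is an example, and meshing only becomes active after a reboot):

ProCurve(config)# no ip routing   # meshing requires routing to be off
ProCurve(config)# no stack        # and stacking as well
ProCurve(config)# mesh a1-a4      # declare the mesh ports
ProCurve(config)# exit
ProCurve# write memory
ProCurve# boot                    # activate the mesh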

My HP BladeSystem is connected to the switch mesh with two LACP trunks. I decided to set up VLAN tunneling because the two connected blades run VMware ESX. The HP Virtual Connect setup was very simple, and thanks to LLDP I detected a cabling error. ;) LLDP is very useful! It shows you a lot of information about the connected network port. That's really cool.
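
If you want to see what LLDP reports, the neighbor table is one command away on the ProCurve CLI (the port name is an example):

ProCurve# show lldp info remote-device a1   # chassis ID, port description etc. of the neighbor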

Currently my mesh is connected with several 1 Gbit/s links, but with a little bit of luck I can get some 10 GbE modules for my 3500yl switches. ;)

That's all for today. Stay tuned. ;)

IBM Installation Manager on RHEL 6 / CentOS 6 x86_64

To solve the error

# ./install
-bash: ./install: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory

and the font display error when IBM Installation Manager starts, you need to install the following packages with yum. gtk2.i686 pulls in glibc.i686, which provides the missing /lib/ld-linux.so.2 loader, and dejavu-sans-fonts fixes the font rendering.

yum install gtk2.i686 libXtst.i686 dejavu-sans-fonts
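
The root cause is that the installer is a 32-bit binary running on a pure x86_64 installation; a quick check confirms it (output abbreviated):

# file ./install
./install: ELF 32-bit LSB executable, Intel 80386, dynamically linked (...)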

Solaris 11 manual IPv4 & IPv6 configuration

Wooow... lots of changes! Today I downloaded the brand new Oracle Solaris 11 operating system and installed it in a VirtualBox virtual machine. Automatic network configuration is a very nice feature, but I'm an old-school guy and prefer manual configuration. ;) So I tried to set up a valid network configuration for IPv4 and IPv6. I've been running a dual-stack configuration at home for several months and I'm very impressed by IPv6. That's why a proper IPv6 configuration is very important to me: I access all my systems over IPv6 whenever it's available.

Okay, no guarantee for all of the following steps, but my Solaris 11 installation seems to run well with this configuration. If I made any mistakes, please comment. Solaris 11 has lots of changes!

Disable automatic network configuration:

# netadm enable -p ncp DefaultFixed

Configure a static IPv4 address and default route:

# ipadm create-ip net0
# ipadm create-addr -T static -a 10.0.2.18/24 net0/v4static
# route -p add default 10.0.2.1

Setup name services and a valid domain name:

# svccfg
svc:> select name-service/switch
svc:/system/name-service/switch> setprop config/host = astring: "files dns"
svc:/system/name-service/switch> setprop config/ipnodes = astring: "files dns"
svc:/system/name-service/switch> select name-service/switch:default
svc:/system/name-service/switch:default> refresh
svc:/system/name-service/switch:default> validate

# svccfg
svc:> select nis/domain
svc:/network/nis/domain> setprop config/domainname = hostname: "itdg.nbg"
svc:/network/nis/domain> select nis/domain:default
svc:/network/nis/domain:default> refresh
svc:/network/nis/domain:default> validate

# svccfg
svc:> select dns/client
svc:/network/dns/client> setprop config/nameserver = net_address: ( 2001:4dd0:fd4e:ff01::1 2001:4dd0:fd4e:ff02::1 )
svc:/network/dns/client> select dns/client:default
svc:/network/dns/client:default> refresh
svc:/network/dns/client:default> validate
svc:/network/dns/client:default> exit

# svcadm enable dns/client

Please note that I configured IPv6 name server addresses! This only works if your DNS servers have a valid IPv6 configuration.

Let’s add the important IPv6 part:

# ipadm create-addr -T addrconf net0/v6
# ipadm create-addr -T static -a 2001:4dd0:fd4e:d00f::a007 net0/v6add

The first line (the addrconf address) is needed because I don't want to configure a static IPv6 default route! That part is handled by my router advertisement daemon together with link-local addresses.
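
To double-check the result, the usual tools are sufficient (a quick sketch, output omitted; the ping target is just an example of any IPv6-reachable host):

# ipadm show-addr                 # should list net0/v4static, net0/v6 and net0/v6add
# netstat -rn                     # persistent IPv4 default route and the RA-learned IPv6 route
# ping -A inet6 ipv6.google.com   # force an IPv6 probe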

That's it! My Solaris 11 installation is reachable through both IPv4 and IPv6. :)

HP BladeSystem C3000

Say hello to Shorty! :)

Shorty is running an Ubuntu OpenStack cloud computing environment. It is connected to an MSA2012fc storage system and a ProCurve 5406zl modular network switch. The 10 GbE uplink to the 10 GbE 4-port module is prepared, but the CX4 X2 module to complete the connection is still missing. I will order the X2 module within the next weeks. ;)

One step towards Converged Infrastructure

I know my blog has been very quiet for several weeks. But I have lots of work and, of course, a daughter. ;)

As you can imagine, I have some new ideas for my home lab. ;) And one is very exciting! I started to rethink and rebuild my HP blade environment. Yesterday a new toy arrived. Have a look at the backside of my HP c-Class BladeSystem: you can see a very important interconnect module for a Converged Infrastructure. ;)

[Image: hpbs_interconnects]

As posted below, I planned to build a network on HP ProCurve gear. But I noticed that I would need a premium license for features like VRRP and OSPF, and these two features are necessary for my new home-lab network infrastructure. I want to create an enterprise network for my systems, not a simple 192.168.x.x home network. That's why I switched to big Cisco 6506 modular switches. Cisco isn't cheap, but it offers all the features I need, and buying three HP ProCurve premium licenses is out of reach; I don't earn that much money. Say hello to my first Cisco 6506 switch. The second 6506 is below it, and I will receive a Sup2 engine with MSFC2 in the next weeks. ;)

[Image: cisco6506]

I hope you followers haven't forgotten my blog. Please stay tuned! The next months will be very exciting. I'm saving money for an HP MDS 600. ;)

Kind regards
Tschokko