Welcome Cisco UCS – Part 1

It was not without reason that I decided to give up my HP BladeSystem. Please welcome a new member of my home lab: a Cisco UCS B-Series blade system!

Cisco UCS 5108 Chassis

Saying goodbye to HP was fairly easy once I had spotted the Cisco 5108 chassis on eBay. I had long been unhappy with the already discontinued HP Virtual Connect 1/10Gb Ethernet module, because it offered only limited network connectivity. Upgrading to six modules would have swallowed more than 200 euros for the HP CX4 cables alone. That would have given me up to 6x 1 Gbit/s per blade with quad-port cards, but at quite a price. Flex-10, on the other hand, has stubbornly held its price for years, around 2000 euros per module on average, and HP's new firmware and support policy makes things worse. Anyway, long story short: I'm glad I took the leap to Cisco. Thanks to the UCS technology and the Cisco VIC cards, none of these limitations exist anymore!

So, what do you see in the picture? Currently a Cisco 5108 chassis with a single Cisco B200 M1 blade installed. I had it delivered bare and will install an Intel X5570 CPU and 48 GB of RAM in the next few days; both are already sitting here. On the back (not pictured) there are eight fans and two FEX modules of type Cisco 2104XP. Essentially a first-generation chassis, but still darn cool.

The heart of the system, the Fabric Interconnect switches, is still missing. I have my eye on the 6120XP switches. I want to start with just one and use it to familiarize myself with the Cisco UCS world. I expect to get there around the turn of the year.

What can you expect?
A German-language blog about Cisco UCS. There is plenty of information on the topic out there, just not that much in German. So stay tuned and see you soon.

First changes: switching to German and a new look!

As already announced, I'm switching back to German. My previous articles will of course stay online, since they get quite a lot of hits on Google.

To make the switch visible, I also changed the theme. Nothing special, but enough to highlight the change to the site.

This article fits perfectly into the Blabla category. 🙂 It is absolutely not worth reading. *laughs* But rethinking my existing categories probably wouldn't hurt either.

In that spirit, look forward to what's coming soon. 😉 In any case, you won't get boring promises from me. *grin*

Sold my HP C7000 BladeSystem!

Goodbye, my amazing HP C7000 BladeSystem. I enjoyed you for so many years, installed many different servers in you, and ran you for thousands of hours. Thank you!!! I hope you like your new home.

After five years of HP experience in my home lab, the HP era ends today. Maybe it's a good time to change my entire blog, too. My last post is a year old, and lots of visitors have forgotten my site. No wonder!

One thing I'd like to change is the language, again. My English is not very good, and I struggle to explain and write down all my ideas. I'm sure there are plenty of other crazy people out there who can help and inspire you with blogs in your preferred language. My wish is to switch back to German. Please accept this decision.

If you see interesting images or snippets but don't understand German, feel free to write me an email -> tschokko@gmx.de

If you think my home lab is dead, you're wrong! 🙂 I love blades, and in January 2015 I will present my newest toy. 😉 Meanwhile, have a good year (I'm sure you have enough work) and visit my blog again. 🙂

Thanks and kind regards

Tschokko

hp_c7000_1
hp_c7000_2

I'm back again!

More than a year has passed since my last blog post. I'm really sorry about that, but my life has changed a little bit. My three-year-old daughter is a very time-consuming little girl who needs my attention, so playing with my home lab all the time is not possible. On top of that, I have a very money-consuming car. Over the last 12 months I had so many expensive repairs that no money was left for my home lab.

The good news is, I'm back and want to spend time on my blog and home lab again. The bad news is that running a few Unix servers with attached storage is no longer interesting for me. 😉 That's why I'm not sure what kind of content I can offer you in the future. But it doesn't matter right now, because I'm currently rebuilding my entire home lab. 🙂 Let's talk a little bit about the rebuild.

As mentioned before, I couldn't spend money on my home lab over the last months. The following picture shows my old and well-known setup.

home_lab_1

But last Sunday a dream came true: I bought new racks for all my stuff. 🙂

home_lab_2

These are 42U Rittal TS8 racks, 80 cm (31.5 in.) wide and 100 cm (39.4 in.) deep. Enough space to hold lots of nice hardware. Yesterday I leveled the racks and joined them together, and it looks awesome. What a crazy home lab!

home_lab_3

Late at night I finished mounting my NetApp FAS3020 storage system, the very first piece of hardware in the new racks. 😉

home_lab_4

I love the new racks and the new look of my home lab. I can't wait until all my hardware is properly installed.

What's next? Well, I need several rack-mount kits to install all the hardware, but they are expensive, especially the Brocade kit for the SAN switches. Providing power is another costly undertaking, but a very nice subject for blogging. Last but not least, I have some cool networking ideas. Enough material to reanimate my blog. 🙂

I hope you'll visit my homepage again in the future. 🙂 Thanks a lot and best regards… Tschokko

Need help! Simple Multi-Site Datacenter Design

Today I need help. 🙂

Currently I'm planning a multi-site datacenter at home. 😉 I want to test some new technologies like Cisco VXLAN, site-to-site replication, vMotion over WAN, and so on. But I'm not a professional networking guy, so I'm not sure whether the following network design is comparable to a real-world datacenter network. I don't need redundant components or designs like a 2- or 3-tier network architecture (core-distribution-access). My lab should be simple, but not too simple. 😉 Feel free to comment…

simple_multi_site_dc

ZFS pool with SSD ZIL (log) device shared over NFS – Performance problems!

Some time ago I bought a STEC ZeusIOPS SSD with 18 GB capacity. The disk comes out of a Sun ZFS Storage 7420 system, but it's a 3.5″ drive, and without a server that supports 3.5″ SAS disk drives I couldn't test it. Today I was finally able to test the drive in a Fujitsu Primergy RX300 S5 server. I installed five 500 GB SATA drives plus my STEC ZeusIOPS SSD. The first disk holds an OpenIndiana installation, the rpool. The remaining four SATA drives are grouped into a ZFS RAIDZ2 pool. I exported a ZFS dataset over NFS and 1 GbE to a VMware ESX host and ran several benchmarks from an Ubuntu Linux virtual machine.
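For reference, here is a minimal sketch of how such a setup looks on OpenIndiana. The pool name, the device names, and the dataset name are illustrations, not my exact configuration:

    # Create a RAIDZ2 pool from the four SATA drives (device names are examples)
    zpool create tank raidz2 c2t1d0 c2t2d0 c2t3d0 c2t4d0

    # Create a dataset and export it over NFS to the ESX host
    zfs create tank/vmware
    zfs set sharenfs=on tank/vmware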

The results without the SSD were 75-80 MB/s write (850 ms latency), between 40 and 65 MB/s rewrite, and 120 MB/s read. I did several runs with bonnie++ and iozone and always got similar values. While the tools ran, I watched the I/O with zpool iostat. The write and rewrite numbers matched the results above. Reading much data from disk was not necessary thanks to a sufficiently large ARC memory cache, which is why the iostat read values stayed below 10 MB/s.
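I no longer have the exact command lines at hand, but runs along these lines produce comparable numbers; the mount point and the sizes are assumptions:

    # Sequential write/rewrite/read with bonnie++, per-file tests disabled
    bonnie++ -d /mnt/nfstest -s 16g -n 0 -u root

    # Sequential write (-i 0) and read (-i 1) with iozone, 128 KB records
    iozone -i 0 -i 1 -r 128k -s 4g -f /mnt/nfstest/iozone.tmp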

Then I added the STEC SSD as a log device to the ZFS pool and reran all the tests. I couldn't believe the values!!! My benchmarks finished with only 45-50 MB/s write and 35-45 MB/s rewrite. Read performance didn't change, of course. The write latency exceeded 10000 ms!!! Something went wrong, but I didn't know what. I repeated the runs while watching the zpool iostat output in parallel, and it constantly showed values above 100 MB/s, sometimes even above 170 MB/s. That is the maximum rate of a single 1 GbE connection! The benchmark output, however, told a very different story and never reached the results of the run without the SSD. I was confused. I effectively disabled the log device by setting the logbias option to throughput, and both the benchmark and the iostat results went back to 75-80 MB/s write. I re-enabled the log device with logbias=latency and was back to benchmark results of at most 50 MB/s write with huge latencies, while the iostat output stayed above 100 MB/s!
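For completeness, the commands involved, sketched from memory with example pool and dataset names:

    # Add the STEC SSD as a dedicated ZIL (log) device
    zpool add tank log c3t0d0

    # Bypass the log device for this dataset, stream writes to the pool
    zfs set logbias=throughput tank/vmware

    # Default behavior: commit synchronous writes to the log device first
    zfs set logbias=latency tank/vmware

    # Watch the per-device I/O while the benchmarks run (5 s interval)
    zpool iostat -v tank 5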

Something is wrong, but I don’t know what. 🙁 Do you have an idea?

New job, dead lab and a temporary solution. ;)

Hi !

Let's start blogging again. 🙂 A lot of time has passed since I last blogged anything interesting. The main reason: I decided to change employers. Over the last weeks we also moved. Now my little family is living in a sweet little house with a big garden and a big garage, but no more big cellar. 😉 I will come back to this circumstance a few lines later.

qskills

Today was my first day at qSkills, my new employer. 🙂 I'm working there as an IT architect and software engineer. My first task is to design and implement an entirely new network, because the current one is very outdated. Maybe the new network will be based on Cisco Nexus. 😉 It's hot stuff, and my new boss loves nice and cool toys. Me too! 😉 By the way, the company earns its money with professional trainings and has a strong relationship with Fujitsu and NetApp. The dedicated training datacenter (nine 47U racks!) is full of nice toys, predominantly Fujitsu servers and of course many NetApp storage systems. A very nice playground for lots of tests in addition to my private lab. 🙂 A good point to talk about my home lab.

My well-known home datacenter is DEAD! I moved everything into the new garage until I can start on my new home-lab idea. 😉 I'm not sure I can afford the new idea this year, because the move to the new home was very expensive, and I have a wife and a little daughter who need money too. But I can wait until next year. Meanwhile I'm running a temporary setup in the new cellar, which is not big enough for a 2 m tall rack but is enough for playing and for blogging about my experience, as you can see in the last picture. 😉 That's why I hope you'll visit my blog again…. 😀

Some news…

It's the end of February, and I haven't blogged in two months. 🙁 But over the last months I had lots of work and prepared some very important changes. I quit my job, and on 1 April 2012 I will start at a new company. I will blog about my new employer soon. It's an amazing company, and I can't wait to start my new job. 🙂 Last weekend I signed my new tenancy agreement, and my new home is amazing, too. But there's no more room for my home lab, so I have to stop playing around with my toys for several months. Don't worry, I have a great idea for my new "Home Datacenter", but I need time and of course money. 😉 Stay tuned, and thank you for visiting my blog. 🙂

Merry Xmas 2011

I wish all my readers and followers on Twitter a

Merry Xmas!

Thank you for reading and commenting on all my blogs and tweets. 🙂 Now it's time to shut down my home lab and give my family some attention. 🙂 My daughter Lina is one year old, and 30 minutes ago I gave her a very big teddy bear. 🙂 That was really fun, and she loves her new toy.

In 2012 I will finish my M$ Hyper-V project, and I plan to publish a real-world Hyper-V cluster configuration. I will start with the current Hyper-V version and then do the same with Hyper-V 3.0! 😉 A Windows SMB 2.2 file server over InfiniBand is on my roadmap, too. And with a little bit of luck I can get an HP EVA 4400 for my home lab. 😉

Stay tuned, and I wish everybody happy holidays! 🙂

Kind regards.
Tschokko

Merry Xmas 2011

HP ProCurve Switch Mesh – My new lab network

I decided to try the HP ProCurve switch mesh technology in conjunction with my HP BladeSystem and HP Virtual Connect. After a week of planning, searching the Internet, and reading several documents, I rebuilt the network in my home lab. The results are amazing: everything runs very fast and with low latency. Even the software-based core router (Debian + Quagga) is not a limitation. I tested the network performance with iperf and was able to push data through the mesh from one routed network to another at 900 Mbit/s! I ran the iperf test between two CentOS 6 virtual machines, each with a single VMXNET3 NIC and placed on different hosts.
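If you want to reproduce the measurement, a plain iperf run like this is all it takes; the server address is an example:

    # On the first VM: start the iperf server
    iperf -s

    # On the second VM: send TCP traffic to the server for 30 seconds
    iperf -c 192.168.10.10 -t 30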

Configuring the switch mesh is very simple: disable routing and stacking, then add all mesh ports with the mesh command. The VLANs are added automatically to all mesh ports; just make sure that every mesh switch knows all configured VLANs. A rough sketch follows below.
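From memory, the relevant commands on a 3500yl look roughly like this; the port numbers are examples, so adapt them to your cabling, and be prepared for the switch to ask for a reboot when meshing is enabled:

    ; Meshing requires routing and stacking to be disabled
    no ip routing
    no stack

    ; Add the inter-switch links to the mesh
    mesh 21-24

    ; Save the configuration
    write memory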

My HP BladeSystem is connected to the switch mesh with two LACP trunks. I decided to set up VLAN tunneling because the two connected blades run VMware ESX. The HP Virtual Connect setup was very simple, and thanks to LLDP I even caught a cabling error. 😉 LLDP is very useful! It shows you plenty of information about the connected network port. That's really cool.
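On the ProCurve side, the neighbor information that exposed my cabling error is just one command away; the port number in the second call is an example:

    ; List all LLDP neighbors with chassis and port IDs
    show lldp info remote-device

    ; Show the details for a single port
    show lldp info remote-device 21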

Currently my mesh is connected with several 1 Gbit/s links, but with a little bit of luck I can get some 10 GbE modules for my 3500yl switches. 😉

That’s all for today. Stay tuned. 😉