RESET Forums (homeservershow.com)

New Gen 8 build. Am I missing anything?


demomanca


Hi all.

 

After many years lurking, it's finally time to post again!

 

I'm replacing my ageing EX490 (upgraded to an E7500 Core 2 Duo, running Ubuntu Server 14.04) with a shiny new Gen 8 Micro.

 

I've ordered:

Gen 8 1610T Model

2 x 8GB Kingston KVR16LE11/8l

4 x 3TB WD Reds (WD30EFRX)

120GB Samsung 850 Pro SSD

HP PS1810-8G switch

Belkin F6U600AU UPS (cheap crap, but this isn't what I would call a mission critical server)

Faceplate kit (call me a sucker, but the blue front looks nice)

 

While I'm waiting for the order to arrive, I've been debating OS and drive setup. I'm relatively happy with the performance of my current server (it can transcode two Plex streams without any hiccups), and I'm familiar with most of the maintenance aspects of Ubuntu (or, more specifically, I'm well skilled in googling what I need to do).

 

The data being stored is movies and TV shows (Sickbeard, SABnzbd and Plex), plus a Time Machine backup of our main iMac, which has all our photos and documents etc. on it. Given that I can re-download the movies and TV shows if I lose them, and the Time Machine copy is already a backup, I'm not too stressed about over-redundancy.

 

My strategy is:

Ubuntu 14.04 on the SSD

RAID-Z1 array across the four WD Reds

Link aggregation to the router from the two gigabit NICs

The remaining ports on the router will hardwire in an Xbox One (Plex), a PS3 (Plex) and the iMac (Time Machine); these are currently wireless G/N/AC, as available on each device.
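For anyone curious, the pool creation I have in mind is roughly this (a sketch only - "tank" and the /dev/disk/by-id paths are placeholders for whatever your drives enumerate as):

```shell
# Create a single RAID-Z1 vdev from the four 3TB Reds.
# Using /dev/disk/by-id paths so the pool survives device reordering,
# and ashift=12 because the Reds are 4K advanced-format drives.
sudo zpool create -o ashift=12 tank raidz1 \
    /dev/disk/by-id/ata-WDC_WD30EFRX-DISK1 \
    /dev/disk/by-id/ata-WDC_WD30EFRX-DISK2 \
    /dev/disk/by-id/ata-WDC_WD30EFRX-DISK3 \
    /dev/disk/by-id/ata-WDC_WD30EFRX-DISK4

# A dataset for the media share, with lightweight compression on
sudo zfs create -o compression=lz4 tank/media
```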

 

I want to stick to Ubuntu rather than FreeNAS (which I understand will cater for all of the above) as sometimes I dabble in a bit of android ROM building, and having an Ubuntu machine is super handy for that. I'm comfortable with the setup of this (mileage may vary of course), but I want to clarify a few things with the people that have the Gen 8:

 

1. From what I can tell, the "fan runs faster" issue when using AHCI on all the drives has been solved with the latest BIOS update. Is this true?

2. For mounting the SSD in the optical bay, I've purchased this Micro SATA to SATA adapter to make the connection neater; however, it looks like the model that ships without the ODD also ships without that cable/plug. Is this the case? If so, I have a spare SATA and Molex cable I can use, so it's not a total failure.

3. Are there any real issues in running link aggregation?

4. Is there a better option than RAID-Z1? I think I'd prefer it to straight-up software RAID 5, or even the B120i's RAID 10 (which I think is overkill for my redundancy requirements).

 

Thanks, and on behalf of all the lurkers, thanks for all the information that's on these forums!

 

 


Ohhh, me! I run a Gen8 MicroServer with ZFS on Linux 0.6.3 + Debian Wheezy (no GUI).

 
demomanca, on 29 Dec 2014 - 02:45 AM, said:
I want to stick to Ubuntu rather than FreeNAS (which I understand will cater for all of the above) as sometimes I dabble in a bit of android ROM building, and having an Ubuntu machine is super handy for that. I'm comfortable with the setup of this (mileage may vary of course), but I want to clarify a few things with the people that have the Gen 8:
 
1. From what I can tell, the "fan runs faster" issue when using AHCI on all the drives has been solved with the latest BIOS update. Is this true?
 
+1 I would like to know the answer to this as well - I've not tried it yet (plus I think I now need to buy support to get the firmware updates on my lab machine).
 

demomanca, on 29 Dec 2014 - 02:45 AM, said:
3. Are there any real issues in running link aggregation?
 
Not at all - it's very easy.
 
Use balance-alb with ifenslave 2.6 and you won't need special switch support.
 
Apologies if you already know this; if not, this mode uses ARP negotiation and works very well.
 
That said, the PS1810-8G supports other modes and also has properly implemented asymmetric flow control, which you might also want to enable if you're doing heavy video streaming.
 
Here is an example...
auto lo
iface lo inet loopback


#eth0 config
auto eth0
iface eth0 inet manual
bond-master bond0
bond-primary eth0


#eth1 config
auto eth1
iface eth1 inet manual
bond-master bond0


# bond0
auto bond0
iface bond0 inet static
address x.x.x.x
gateway x.x.x.x
netmask x.x.x.x
dns-nameservers x.x.x.x x.x.x.x
dns-search xxxxxxx
bond-mode balance-alb
bond-miimon 100
bond-slaves none
NB: On Ubuntu you might have interfaces called em0 and em1.
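Once it's up, you can sanity-check the bond from the kernel's side (assuming the bond is named bond0 as above):

```shell
# Show bonding mode, slave interfaces and per-slave link state
cat /proc/net/bonding/bond0

# Quick check that both slaves actually attached to the bond
grep "Slave Interface" /proc/net/bonding/bond0
```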
 

demomanca, on 29 Dec 2014 - 02:45 AM, said:
4. Is there a better option than RAID-Z1? I think I'd prefer it to straight-up software RAID 5, or even the B120i's RAID 10 (which I think is overkill for my redundancy requirements).
 
Personally I would say stick with ZFS on Linux - but I am the converted - haha.
 
Although AHCI = a little more fan noise, ZFS really wants to manage those drives directly.
 
Actually my lab machine has the same setup as you, but the G2020 chip. The performance is brilliant!
 
16GB of ECC will be ideal; just remember to set the zfs_arc_max module parameter, otherwise ZFS on Linux won't use more than 50% of your memory by default.
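For example, you can cap (or raise) the ARC via a modprobe option - the 12GB figure below is just an illustration, and the value is in bytes:

```
# /etc/modprobe.d/zfs.conf
# Allow the ARC to grow to 12GB (12 * 1024^3 bytes)
options zfs zfs_arc_max=12884901888
```

You'll need to reload the zfs module (or reboot) for the change to take effect.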
 
FWIW, I left a demo box with an indie post house in London (late Dec). It's running an open-source video storage collaboration software that I co-develop. These guys are pulling multiple high-bandwidth video streams from the unit without issues (DNxHD 185/220).
 
It's also worth mentioning they have it sat in the suite with them - so the fan noise issue seemingly depends on your tolerance for this kind of thing - although I admit I wouldn't want it in my living room or bedroom. Most importantly, it's far easier to cool this setup than a P222.
 
BTW, you might want to consider swapping to 3 x 4TB Reds instead, because 1) this leaves you a hot-swap bay for your boot drive, and 2) ZFS will utilise the space a little better on a Z1 over 3 drives - it's just a matter of how the numbers work.
 
Keep us up to date with your progress. I’ll be checking back :-)
Edited by scruffters

I'm hoping the AHCI thing is fixed, as the server is going to sit under my TV.

 

I was going to go with 3 drives, but it looks like expanding a pool by adding disks isn't on the cards with ZFS RAID. With 4 drives already in the pool, this config will allow me, down the track, to swap out each 3TB for a larger drive (those 8TB Seagate drives sound nice) and have the maximum storage available to me; whereas if I leave one bay free, I can never expand into that slot.

 

I'll post pics etc once it arrives.

Edited by demomanca

It'll be too loud to sit in the living room...

 

The lab box sits in my basement. Every time I've moved it to the desk in the 'man room', it ends up back downstairs within a fortnight. At first it's not so bad, but sooner or later the mrs makes me sleep in the spare room and I remember why I put it in the basement. haha.

 

Re: expanding in ZFS - I agree that can be a limitation.

 

That said, you should still be able to swap the drives out for larger units in the future - but it requires a resilver for each replacement, so it's probably not ideal...
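Roughly, the swap-out goes like this for each disk (a sketch only - "tank" and the device names are placeholders, and you must wait for each resilver to finish before pulling the next drive):

```shell
# Let the pool grow automatically once every member has been upsized
sudo zpool set autoexpand=on tank

# Replace one drive at a time; the pool resilvers onto the new disk
sudo zpool replace tank ata-WDC_WD30EFRX-OLD1 ata-NEW_BIGGER_DISK1

# Watch progress; only move to the next drive after "resilvered" shows complete
sudo zpool status tank
```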

Edited by scruffters

The main fan runs OK in AHCI with the 6/6/14 J06 BIOS/firmware, but the PSU fan is too loud for a living room situation.  It's too loud for any domestic situation, unless you can stuff the machine in a cupboard or something.  I had my Gen8 sitting in the corner of the spare room, and I could hear it on the other side of the house.  40mm fan is crap.

 

The Lenovo TS140, though, is perfectly quiet enough for a living room, it's quieter than my HTPC...


  • 2 weeks later...

So it's all built and running!

 

Ubuntu 14.10

OS on Btrfs on the Samsung 850, set up as a RAID1 mirror on its own. Btrfs detects it's an SSD fine, and temp monitoring works.

Data dump on a RAID-Z1 across the four WDs

Both network cards bonded (802.3ad bonding)

Sickbeard, Sabnzbd, qbittorrent, webmin and Time Machine backups all up and running.

 

A few things of note - I won't explain in too much detail, but future builders who come here might like to read this:

 

Ubuntu 14.10 detected everything fine - video, RAID controller etc. No issues during install.

Network bonding using what scruffters said above didn't work for me. Not sure if it was just me, but I had to set both interfaces as bond slaves and list them under the bond0 config as slaves (not "none" as per above).
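For future reference, the bond0 stanza that ended up working for me looked more like this (addresses redacted, and your interfaces may be named emX rather than ethX):

```
auto bond0
iface bond0 inet static
address x.x.x.x
gateway x.x.x.x
netmask x.x.x.x
bond-mode 802.3ad
bond-miimon 100
bond-slaves eth0 eth1
```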

Also, the network config manager in Gnome punched its way into my network config and was trying to configure the NICs individually as well. This shouldn't be an issue if Gnome isn't installed, but it was for me. I had to tell it to "unconfigure" both interfaces.

Configuring Btrfs as the boot drive was simple enough (done during setup; Ubuntu supports it natively).

Configuring ZFS was SUPER easy, compared to other ways I've done it in the past. Very happy.

Transmission was a nightmare to configure, so I went with qBittorrent instead.

Netatalk (and subsequently Time Machine backups) was also a nightmare to configure. It appears a lot of the guides on the internet for both of these are out of date.
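If you end up on netatalk 3.x, the afp.conf approach is along these lines - the share name, path and size limit here are just examples, adjust to your own pool layout:

```
; /etc/netatalk/afp.conf (netatalk 3.x)
[Global]
mimic model = TimeCapsule6,106

[Time Machine]
path = /tank/timemachine
time machine = yes
vol size limit = 500000   ; cap in MiB so backups don't eat the whole pool
```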

 

Temps at the moment, according to iLO and Webmin, are good (I live in a hot climate, so my averages are always higher than normal, but most sensors are averaging 30-35°C).

 

Transfer speeds - pulling data from my old HP EX490 averaged 70-80MB/s, which I'm happy with, given the EX490 was serving from a JBOD array over a 1Gbps connection.

A ZFS scrub of 4TB of data took approx 5½ hours.
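If you want to run or check a scrub yourself (pool name "tank" is a placeholder):

```shell
# Kick off a scrub in the background
sudo zpool scrub tank

# Shows scrub progress, estimated time remaining and any repaired errors
zpool status tank
```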

 

Some final thoughts for anyone considering one of these boxes:

iLO is brilliant. Remote console is great (effectively a VNC connection straight to the hardware - you can manage the BIOS over a remote link!), and I doubt you'll ever get something like this on a BYO box.

Hardware is super solidly built.

Noise for me is fine, and fans sit at 10-16%.


Network bonding using what scruffters said above didn't work for me. Not sure if it was just me, but I had to set both interfaces as bond slaves and list them under the bond0 config as slaves (not "none" as per above).

You may have needed a restart to get the bond up and running - also remember that 14.04 uses emX interface names... so your interfaces will probably be em0 and em1.

 

Did you remember to install ifenslave and was it version 2.6?

 

I've not seen this fail many times - only when hooked up to switches that prefer mode 4.

 

Also, this should help a bit.

 

https://help.ubuntu.com/community/UbuntuBonding

Edited by scruffters

 I doubt you'll ever get something like this on a BYO box.

 

 

I have better remote management on my ASRock-based server, in that it works without needing a licence once the OS boots. It's a BYO; spec is in the sig. It's a basic feature of most server and workstation boards - ASRock Rack, Supermicro, Asus WS; even Lenovo PCs have it.

Edited by HellDiverUK

Revisiting a few things now, I really think it was more Gnome messing with the network setting that was doing it.

 

I didn't realise there were other server boards with this on them (not that I know anything about BYO servers). It's an awesome feature!

