RESET Forums (homeservershow.com)

5029C-T build



Hello everyone,


  I wanted to share with you my build of Supermicro's 5029C-T microserver.


Parts used:


- Case+Mobo: Microserver 5029C-T (CSE-721TQ-250B2 case with X11SCL-iF motherboard) https://www.supermicro.com/en/products/system/midtower/5029/SYS-5029C-T.cfm

- CPU: Intel Xeon E-2278G

- CPU cooler: Noctua NH-L12S with a 120mm fan

- Additional case fan: NF-A8 PWM

- RAID card (in the PCIe x8 slot): LSI MegaRAID 9361-8i with low-profile bracket

- RAM: 2x 32GB Nemix DDR4-2666 ECC UDIMM 2Rx8

- Storage:
    - on the NVMe M.2 connector: Samsung PM981 256GB
    - on the RAID card:
        - 2x Seagate Skyhawk 4TB 3.5"
        - 2x Toshiba X300 4TB 3.5"
        - 1x Seagate Barracuda 4TB 2.5"
        - for a total of 14.5TB in RAID5
- Additional cables: 2x SFF-8643 to 4x SATA
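As a sanity check on that 14.5TB figure: RAID5 gives (N-1) x drive size usable, and the gap between vendor terabytes (10^12 bytes) and binary tebibytes accounts for the rest. A quick sketch:

```shell
# RAID5 usable capacity = (N-1) x drive size.
# Vendors label "4TB" = 4e12 bytes, which is only ~3.64 TiB,
# so five 4TB drives in RAID5 report roughly 14.55 TiB usable.
usable_tib=$(awk 'BEGIN { printf "%.2f", (5 - 1) * 4e12 / (1024 ^ 4) }')
echo "$usable_tib"   # 14.55
```

This matches the ~15259648MB the controller reports for the virtual drive in the dmesg further down.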


This build originally used a Xeon E-2226G with a Supermicro CPU cooler, as I couldn't wait any longer after more than 4 months' delay on the delivery of the E-2278G. This proved to be a problematic setup: the E-2226G's lack of hyperthreading didn't cope well with my 12 virtual machines, and the CPU cooler constantly spun up and down in a very noisy fashion (its 60mm fan can reach 5100rpm) as it struggled to cool the CPU properly. The E-2226G would often hit 95°C, generating a very annoying series of beeps from the motherboard, even with the fan set to Full in IPMI. This was completely unacceptable, so I ordered the Noctua CPU cooler and case fan. Looking back, not using a substitute thermal paste may have been a mistake there; the stock stuff may not have been up to the task.


  The case manual states that it can take a 9cm fan up front, which it certainly can space-wise, but there are no mounting holes for screws! The grid in the front can accommodate the 80mm fan (possibly two if really necessary), but using the rubber mounts supplied with the fan, you can only fix two of them in it. I don't have the tools to drill through sheet metal, so I couldn't add any more. However, my 4xSATA-to-SFF-8643 cables are keeping it in position, as they're rather long and had to be folded into the space between the motherboard and the front of the case. The airflow generated by the fan running at around 1700-1800RPM has made a great difference in cooling: the CPU rarely goes over 70°C, mostly operating at around 55°C if not less.


  The Noctua NH-L12S was a more difficult addition, as it required removing the glued-on backplate under the motherboard and replacing it with Noctua's own. Be very careful removing it; it's very easy to damage the traces on the mobo if your hand slips. A hairdryer to warm the glue up is highly recommended. Once you've replaced the backplate, the cooler isn't too difficult to put together, and it's a very snug fit. I recommend placing the CPU cooler so the heat pipes are over the NVMe drive, not over the RAM sticks. This cooler is 70mm high, which leaves about 3-4mm clearance with the drive cage. The noise level is now extremely low: no more pesky fan spin-ups, I can run the fans on the Optimal setting (from within the IPMI website on the BMC) without overheating the system, and the 12cm fan averages around 1100rpm, as does the CPU cooler's fan. I didn't change the larger case fan as it's already a 25dB part, but that may change later on.
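For reference, the fan mode can also be switched from the OS rather than the IPMI web page. On Supermicro X11 boards this is commonly done through ipmitool's raw interface; the raw codes below are the widely circulated X11 values, so treat them as an assumption and check against your board's documentation:

```shell
# Read the current Supermicro fan mode
# (0 = Standard, 1 = Full, 2 = Optimal, 4 = Heavy IO)
ipmitool raw 0x30 0x45 0x00

# Set fan mode to Optimal (the mode used in this build)
ipmitool raw 0x30 0x45 0x01 0x02

# Verify the resulting fan speeds
ipmitool sensor | grep -i fan
```

Handy when the BMC web UI is slow or you want to script the change.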


  For storage, I took my four 3.5" drives from the Gen8 Microserver and transferred them over. This case does come with drive cages, which is really nice. I added the 2.5" drive so I could increase my storage to 14.5TB for my new Nextcloud setup, more on that below. The LSI 9361-8i was chosen as it's FreeBSD compatible and comes with 1GB of cache. It's also future-proof, as it can deal with both 6 and 12Gb/s drives. I haven't acquired the Battery Backup Unit for this card yet, as it requires a second PCIe slot, although it doesn't seem to draw power from it. Maybe I'll be able to mount it above the RAID card somehow? Something to look into. I'm a bit perplexed at the lack of 2.5" drives larger than 4-5TB. Was this the case before Covid-19 struck? I may add another 4TB drive later on in the last slot. Seven drives in such a small case is really great!
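For anyone reproducing the array from the OS side, Broadcom's storcli utility drives the 9361-8i. A sketch, where the enclosure:slot IDs are placeholders you'd substitute from the first command's output:

```shell
# List the controller, enclosures, and physical drives
storcli64 /c0 show

# Create a RAID5 virtual drive from five drives.
# 252:0-4 is an example enclosure:slot range; use the IDs reported above.
# wb = write-back, ra = read-ahead, to make use of the card's 1GB cache.
storcli64 /c0 add vd type=raid5 drives=252:0-4 wb ra

# Check the resulting virtual drive
storcli64 /c0/v0 show
```

The author built the array from the card's BIOS instead (see the UEFI caveat below the storage section), but the end result is the same virtual drive.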


  The BIOS on this system is a bit perplexing: if you change the wrong setting, it hangs on boot. I've had to remove the battery more than once to get it back to normal. I've got it set up to boot UEFI, which means you can't access the LSI card's BIOS. You'll have to build the array first in legacy BIOS mode, then switch over to UEFI boot. I haven't flashed the BIOS; it's still on the November 2019 version. I also had a problem during OS install with the hardware watchdog enabled, as the system would reboot on its own at the most unexpected moments! I've got that disabled now.
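If the BMC watchdog bites during an install, it can also be stopped from a running system with ipmitool (assuming the ipmi driver is loaded, as the dmesg below shows it is here):

```shell
# Show the current BMC watchdog state
ipmitool mc watchdog get

# Stop the watchdog so it can't reboot the machine mid-install
ipmitool mc watchdog off
```

Disabling it in the BIOS, as done here, is the permanent fix.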


  Operating-system-wise, I went once again with FreeBSD so I could run my VMs in bhyve. It installed fine on the NVMe drive and I restored my configs using duplicity off the 4TB external USB 3.0 drive. Both the NVMe and the RAID5 drive are formatted with ZFS. One thing to note is that there are two drivers in FreeBSD for the 9361-8i, the older mfi one and the newer mrsas one. I went for the latter, and had to download the MegaRAID.sh blob of scripts to be able to monitor the card. Unfortunately, at this time you can't access the SMART information of the individual drives for this card with mrsas, which you can with mfi. YMMV. The virtual drive seems faster with the mrsas driver for me.
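The choice between mfi and mrsas for this card is controlled by a loader tunable on FreeBSD; a minimal /boot/loader.conf fragment to make mrsas claim the controller looks like this:

```shell
# /boot/loader.conf -- prefer the newer mrsas(4) driver over mfi(4)
# for MegaRAID SAS3 controllers like the 9361-8i
mrsas_load="YES"
hw.mfi.mrsas_enable="1"
```

Without the tunable, mfi attaches first, as the "already loaded from mrsas.ko" line in the dmesg below hints at.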


  This setup can run the following without touching swap (pagefile, for you Windows types): 4 OpenBSD 6.7 VMs (Zabbix monitor + logfile server, mail server, package builder and hosting server, and IRC client), 1 Ubuntu 20.04 VM (Collabora running in Docker with an Apache 2 proxy for Nextcloud), and 7 Windows VMs (2x AD on Server Core 2019, 1x Windows Admin Center on Server Core 2019, 1x WSUS on Server Core 2019, and 3 VMs for Win10 Pro, Win8.1 Pro and Win7 SP1 Pro). It all runs without breaking a sweat, with most VMs having 2-4GB of RAM and a couple of CPU cores. The Win10 Pro VM, along with the Zabbix monitor, mail server and IRC client, is currently on the NVMe zpool; the rest of the VMs are on the RAID5 zpool. I also have a FreeBSD jail for my Nextcloud setup running off the RAID5 zpool. Having 13TB available for it is great, and I'm now sharing it with my friends and family. It makes life a lot easier for storing family photos and whatnot securely, away from the search engines etc. Nextcloud is running on Nginx, with PHP 7.4 and PostgreSQL 11.8, a fairly fast setup, along with Redis as the memory cache daemon. Don't ever run Nextcloud without it; it's too slow even on modern hardware, especially when using your share as a network drive as I do now.
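On the Redis point: the relevant Nextcloud settings can be applied with occ rather than hand-editing config.php. A sketch, run from the Nextcloud directory inside the jail as the web-server user; the socket path is an assumption for this setup:

```shell
# Enable APCu for local caching and Redis for the distributed/locking cache
php occ config:system:set memcache.local --value '\OC\Memcache\APCu'
php occ config:system:set memcache.locking --value '\OC\Memcache\Redis'

# Point Nextcloud at the Redis daemon (unix socket path is an example;
# port 0 tells Nextcloud a socket is in use)
php occ config:system:set redis host --value '/var/run/redis/redis.sock'
php occ config:system:set redis port --value 0 --type integer
```

A unix socket avoids TCP overhead when Redis runs in the same jail.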


  All in all, this wasn't a cheap setup (I estimate having spent around $2,500 on the lot) compared to the Microserver Gen10 Plus, and the BMC isn't as polished as iLO4/5, but I'm very satisfied with it. Of course, if you need 10Gbps networking as well as hardware RAID, this motherboard isn't for you, but it can be swapped for an embedded Xeon version with 2x 10Gbps. At least the licensing for the IPMI is only $30, and it's only needed to upgrade the BIOS. You'll find that the remote install drive feature can't use a local ISO image; you have to have a Windows share set up for it. That's fairly trivial, but I just used a USB stick instead. All went smoothly once I learnt not to mess with the BIOS settings too much! I'm glad I got the cooling right with the replacement CPU cooler and it's all running smoothly. A perfect machine for the homelab!





Here's the dmesg:


Copyright (c) 1992-2019 The FreeBSD Project.
Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993, 1994
        The Regents of the University of California. All rights reserved.
FreeBSD is a registered trademark of The FreeBSD Foundation.
FreeBSD 12.1-RELEASE-p6 GENERIC amd64
FreeBSD clang version 8.0.1 (tags/RELEASE_801/final 366581) (based on LLVM 8.0.1)
VT(efifb): resolution 1024x768
module_register: cannot register pci/mrsas from kernel; already loaded from mrsas.ko
Module pci/mrsas failed to register: 17
CPU microcode: no matching update found
CPU: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (3408.31-MHz K8-class CPU)
  Origin="GenuineIntel"  Id=0x906ed  Family=0x6  Model=0x9e  Stepping=13
  AMD Features=0x2c100800<SYSCALL,NX,Page1GB,RDTSCP,LM>
  AMD Features2=0x121<LAHF,ABM,Prefetch>
  Structured Extended Features2=0x40000000<SGXLC>
  Structured Extended Features3=0xbc000600<MD_CLEAR,IBPB,STIBP,L1DFL,ARCH_CAP,SSBD>
  TSC: P-state invariant, performance statistics
real memory  = 68717379584 (65534 MB)
avail memory = 66736189440 (63644 MB)
Event timer "LAPIC" quality 600
ACPI APIC Table: < >
FreeBSD/SMP: Multiprocessor System Detected: 16 CPUs
FreeBSD/SMP: 1 package(s) x 8 core(s) x 2 hardware threads
random: unblocking device.
ioapic0 <Version 2.0> irqs 0-119 on motherboard
Launching APs: 1 5 2 3 14 4 15 10 9 7 13 12 6 8 11
Timecounter "TSC-low" frequency 1704154490 Hz quality 1000
random: entropy device external interface
000.000017 [4335] netmap_init               netmap: loaded module
[ath_hal] loaded
module_register_init: MOD_LOAD (vesa, 0xffffffff8112e0f0, 0) error 19
random: registering fast source Intel Secure Key RNG
random: fast provider: "Intel Secure Key RNG"
kbd1 at kbdmux0
efirtc0: <EFI Realtime Clock> on motherboard
efirtc0: registered as a time-of-day clock, resolution 1.000000s
cryptosoft0: <software crypto> on motherboard
acpi0: <SUPERM SUPERM> on motherboard
acpi0: Power Button (fixed)
cpu0: <ACPI CPU> on acpi0
hpet0: <High Precision Event Timer> iomem 0xfed00000-0xfed003ff on acpi0
Timecounter "HPET" frequency 24000000 Hz quality 950
Event timer "HPET" frequency 24000000 Hz quality 350
Event timer "HPET1" frequency 24000000 Hz quality 340
Event timer "HPET2" frequency 24000000 Hz quality 340
Event timer "HPET3" frequency 24000000 Hz quality 340
Event timer "HPET4" frequency 24000000 Hz quality 340
Event timer "HPET5" frequency 24000000 Hz quality 340
Event timer "HPET6" frequency 24000000 Hz quality 340
Event timer "HPET7" frequency 24000000 Hz quality 340
attimer0: <AT timer> port 0x40-0x43,0x50-0x53 irq 0 on acpi0
Timecounter "i8254" frequency 1193182 Hz quality 0
Event timer "i8254" frequency 1193182 Hz quality 100
Timecounter "ACPI-fast" frequency 3579545 Hz quality 900
acpi_timer0: <24-bit timer at 3.579545MHz> port 0x1808-0x180b on acpi0
pcib0: <ACPI Host-PCI bridge> port 0xcf8-0xcff on acpi0
pci0: <ACPI PCI bus> on pcib0
pcib1: <ACPI PCI-PCI bridge> irq 16 at device 1.0 on pci0
pci1: <ACPI PCI bus> on pcib1
AVAGO MegaRAID SAS FreeBSD mrsas driver version: 07.709.04.00-fbsd
mrsas0: <AVAGO Invader SAS Controller> port 0x6000-0x60ff mem 0x91300000-0x9130ffff,0x91200000-0x912fffff irq 16 at device 0.0 on pci1
mrsas0: FW now in Ready state
mrsas0: Using MSI-X with 16 number of vectors
mrsas0: FW supports <96> MSIX vector,Online CPU 16 Current MSIX <16>
mrsas0: max sge: 0x106, max chain frame size: 0x1000, max fw cmd: 0x39f
mrsas0: Issuing IOC INIT command to FW.
mrsas0: IOC INIT response received from FW.
mrsas0: FW supports SED 
mrsas0: FW supports JBOD Map 
mrsas0: Jbod map is supported
mrsas0: VD created target ID: 0x0
mrsas0: max_fw_cmds: 927  max_scsi_cmds: 911
mrsas0: MSI-x interrupts setup success
mrsas0: mrsas_ocr_thread
xhci0: <Intel Cannon Lake USB 3.1 controller> mem 0x91700000-0x9170ffff irq 16 at device 20.0 on pci0
xhci0: 32 bytes context size, 64-bit DMA
usbus0 on xhci0
usbus0: 5.0Gbps Super Speed USB v3.0
pci0: <memory, RAM> at device 20.2 (no driver attached)
pci0: <serial bus> at device 21.0 (no driver attached)
pci0: <serial bus> at device 21.1 (no driver attached)
pci0: <simple comms> at device 22.0 (no driver attached)
pci0: <simple comms> at device 22.4 (no driver attached)
pcib2: <ACPI PCI-PCI bridge> irq 18 at device 27.0 on pci0
pci2: <ACPI PCI bus> on pcib2
igb0: <Intel(R) PRO/1000 PCI-Express Network Driver> port 0x5000-0x501f mem 0x91600000-0x9167ffff,0x91680000-0x91683fff irq 17 at device 0.0 on pci2
igb0: Using 1024 TX descriptors and 1024 RX descriptors
igb0: Using 4 RX queues 4 TX queues
igb0: Using MSI-X interrupts with 5 vectors
igb0: Ethernet address: 3c:ec:ef:02:d0:5e
igb0: netmap queues/slots: TX 4/1024, RX 4/1024
pcib3: <ACPI PCI-PCI bridge> irq 19 at device 27.6 on pci0
pci3: <ACPI PCI bus> on pcib3
igb1: <Intel(R) PRO/1000 PCI-Express Network Driver> port 0x4000-0x401f mem 0x91500000-0x9157ffff,0x91580000-0x91583fff irq 18 at device 0.0 on pci3
igb1: Using 1024 TX descriptors and 1024 RX descriptors
igb1: Using 4 RX queues 4 TX queues
igb1: Using MSI-X interrupts with 5 vectors
igb1: Ethernet address: 3c:ec:ef:02:d0:5f
igb1: netmap queues/slots: TX 4/1024, RX 4/1024
pcib4: <ACPI PCI-PCI bridge> irq 16 at device 28.0 on pci0
pci4: <ACPI PCI bus> on pcib4
pcib5: <ACPI PCI-PCI bridge> irq 17 at device 28.1 on pci0
pci5: <ACPI PCI bus> on pcib5
pcib6: <ACPI PCI-PCI bridge> irq 17 at device 0.0 on pci5
pci6: <ACPI PCI bus> on pcib6
vgapci0: <VGA-compatible display> port 0x3000-0x307f mem 0x90000000-0x90ffffff,0x91000000-0x9101ffff irq 17 at device 0.0 on pci6
vgapci0: Boot video device
pcib7: <ACPI PCI-PCI bridge> irq 16 at device 29.0 on pci0
pci7: <ACPI PCI bus> on pcib7
nvme0: <Generic NVMe Device> mem 0x91400000-0x91403fff irq 16 at device 0.0 on pci7
pci0: <simple comms> at device 30.0 (no driver attached)
isab0: <PCI-ISA bridge> at device 31.0 on pci0
isa0: <ISA bus> on isab0
pci0: <serial bus> at device 31.5 (no driver attached)
acpi_button0: <Sleep Button> on acpi0
acpi_tz0: <Thermal Zone> on acpi0
uart0: <16550 or compatible> port 0x3f8-0x3ff irq 4 flags 0x10 on acpi0
uart0: console (115200,n,8,1)
uart1: <16550 or compatible> port 0x2f8-0x2ff irq 3 on acpi0
ipmi0: <IPMI System Interface> port 0xca2,0xca3 on acpi0
ipmi0: KCS mode found at io 0xca2 on acpi
acpi_syscontainer0: <System Container> on acpi0
orm0: <ISA Option ROM> at iomem 0xc0000-0xc7fff pnpid ORM0000 on isa0
atrtc0: <AT realtime clock> at port 0x70 irq 8 on isa0
atrtc0: Warning: Couldn't map I/O.
atrtc0: registered as a time-of-day clock, resolution 1.000000s
Event timer "RTC" frequency 32768 Hz quality 0
atrtc0: non-PNP ISA device will be removed from GENERIC in FreeBSD 12.
coretemp0: <CPU On-Die Thermal Sensors> on cpu0
est0: <Enhanced SpeedStep Frequency Control> on cpu0
ZFS filesystem version: 5
ZFS storage pool version: features support (5000)
Timecounters tick every 1.000 msec
mrsas0: Disestablish mrsas intr hook
ugen0.1: <0x8086 XHCI root HUB> at usbus0
uhub0: <0x8086 XHCI root HUB, class 9/0, rev 3.00/1.00, addr 1> on usbus0
nvd0: <SAMSUNG MZVLB256HAHQ-00000> NVMe namespace
nvd0: 244198MB (500118192 512 byte sectors)
ipmi0: IPMI device rev. 1, firmware rev. 1.23, version 2.0, device support mask 0xbf
da0 at mrsas0 bus 0 scbus0 target 0 lun 0
da0: <AVAGO MR9361-8i 4.68> Fixed Direct Access SPC-3 SCSI device
da0: Serial Number 002ffd610b0a346c26eaf48b09b00506
da0: 150.000MB/s transfers
da0: 15259648MB (31251759104 512 byte sectors)
ipmi0: Number of channels 2
ipmi0: Attached watchdog
ipmi0: Establishing power cycle handler
Root mount waiting for: usbus0
ugen0.3: <vendor 0x0557 product 0x2419> at usbus0
ukbd0 on uhub1
ukbd0: <vendor 0x0557 product 0x2419, class 0/0, rev 1.10/1.00, addr 2> on usbus0
kbd2 at ukbd0
Root mount waiting for: usbus0
Root mount waiting for: usbus0
ugen0.4: <Seagate Expansion> at usbus0
umass0 on uhub0
umass0: <Seagate Expansion, class 0/0, rev 3.00/7.10, addr 3> on usbus0
umass0:  SCSI over Bulk-Only; quirks = 0x8100
umass0:2:0: Attached to scbus2
da1 at umass-sim0 bus 0 scbus2 target 0 lun 0
da1: <Seagate Expansion 0710> Fixed Direct Access SPC-4 SCSI device
da1: Serial Number NAADMFQE
da1: 400.000MB/s transfers
da1: 3815447MB (7814037167 512 byte sectors)
da1: quirks=0x2<NO_6_BYTE>
GEOM: da1: the primary GPT table is corrupt or invalid.
GEOM: da1: using the secondary instead -- recovery strongly advised.
GEOM_ELI: Device nvd0p2.eli created.
GEOM_ELI: Encryption: AES-XTS 128
GEOM_ELI:     Crypto: software
igb0: link state changed to UP
lo0: link state changed to UP
igb0: link state changed to DOWN
igb1: link state changed to UP
ums0 on uhub1
ums0: <vendor 0x0557 product 0x2419, class 0/0, rev 1.10/1.00, addr 2> on usbus0
ums0: 3 buttons and [Z] coordinates ID=0
igb0: link state changed to UP

I'll try and take some pics later, when people aren't connected to the Nextcloud. It took me a few months to save up for this configuration, and yes, it is fairly expensive, but it does allow for more flexibility than the current HPE Microserver.


As requested, here are some pictures. You can see how tight a fit the Noctua NH-L12S is here, covering the RAM sticks and barely having any margin when slid under the drive backplane and the case itself.






And some more: you can see how I was routing the drive cables with just one fan fitted, as well as just how big the CPU cooler is compared to the motherboard. I chose to fit it with the heat pipes on the SSD side, as I wasn't certain there was clearance over the RAM.



I've now added a second fan and re-routed the SFF-8643-to-4xSATA cables to fit at the top of the case.



Here's the current reading from the ipmitool sensor command (FAN1 = CPU fan, FAN2 & FANA = additional 80mm fans, FAN3 = stock case fan):


CPU Temp         | 56.000     | degrees C  | ok    | 5.000     | 5.000     | 10.000    | 95.000    | 100.000   | 100.000   
PCH Temp         | 41.000     | degrees C  | ok    | 5.000     | 5.000     | 10.000    | 85.000    | 90.000    | 105.000   
System Temp      | 33.000     | degrees C  | ok    | 5.000     | 5.000     | 10.000    | 80.000    | 85.000    | 90.000    
Peripheral Temp  | 49.000     | degrees C  | ok    | 5.000     | 5.000     | 10.000    | 80.000    | 85.000    | 90.000    
VcpuVRM Temp     | 54.000     | degrees C  | ok    | 5.000     | 5.000     | 10.000    | 95.000    | 100.000   | 105.000   
M2NVMeSSD Temp   | na         |            | na    | na        | na        | na        | na        | na        | na        
FAN1             | 1100.000   | RPM        | ok    | 200.000   | 300.000   | 500.000   | 25300.000 | 25400.000 | 25500.000 
FAN2             | 1300.000   | RPM        | ok    | 200.000   | 300.000   | 500.000   | 25300.000 | 25400.000 | 25500.000 
FAN3             | 1300.000   | RPM        | ok    | 200.000   | 300.000   | 500.000   | 25300.000 | 25400.000 | 25500.000 
FANA             | 1400.000   | RPM        | ok    | 200.000   | 300.000   | 500.000   | 25300.000 | 25400.000 | 25500.000 
DIMMA1 Temp      | 36.000     | degrees C  | ok    | 5.000     | 5.000     | 10.000    | 80.000    | 85.000    | 90.000    
DIMMB1 Temp      | 37.000     | degrees C  | ok    | 5.000     | 5.000     | 10.000    | 80.000    | 85.000    | 90.000    
12V              | 12.100     | Volts      | ok    | 10.475    | 10.605    | 11.060    | 13.335    | 13.660    | 13.790    
5VCC             | 5.086      | Volts      | ok    | 4.006     | 4.126     | 4.426     | 5.716     | 6.016     | 6.136     
3.3VCC           | 3.265      | Volts      | ok    | 2.670     | 2.738     | 2.959     | 3.792     | 4.013     | 4.081     
VBAT             | 0x4        | discrete   | 0x04ff| na        | na        | na        | na        | na        | na        
Vcpu             | 1.091      | Volts      | ok    | 0.006     | 0.006     | 0.006     | 1.714     | 1.714     | 1.714     
VDimm            | 1.245      | Volts      | ok    | 0.951     | 0.972     | 1.049     | 1.350     | 1.427     | 1.448     
5VSB             | 5.101      | Volts      | ok    | 3.961     | 4.051     | 4.381     | 5.611     | 5.941     | 6.031     
3.3VSB           | 3.281      | Volts      | ok    | 2.618     | 2.686     | 2.890     | 3.723     | 3.927     | 3.995     
1.8V_PCH         | 1.869      | Volts      | ok    | 1.365     | 1.401     | 1.509     | 1.950     | 2.058     | 2.094     
1.2V_BMC         | 1.229      | Volts      | ok    | 0.956     | 0.984     | 1.061     | 1.362     | 1.439     | 1.467     
1.05V_PCH        | 1.068      | Volts      | ok    | 0.823     | 0.844     | 0.914     | 1.173     | 1.243     | 1.264     
Chassis Intru    | 0x1        | discrete   | 0x0100| na        | na        | na        | na        | na        | na        
AOC_SAS Temp     | 75.000     | degrees C  | ok    | 5.000     | 5.000     | 10.000    | 100.000   | 105.000   | 110.000   
HDD Temp         | 43.000     | degrees C  | ok    | 5.000     | 5.000     | 10.000    | 50.000    | 55.000    | 60.000    
HDD Status       | 0x1        | discrete   | 0x0100| na        | na        | na        | na        | na        | na        


Also of note: I'm using the Low-Noise Adaptors Noctua supply with their kit, so the RPM is nice and low. The two extra case fans are held together by the front panel's plastic holders on each end of the front bezel, although not very tightly. Having rubber "feet" on the fans helps them not slip everywhere, not that there's much clearance. The fans' power cables also help hold them in place. It's just gotten a bit quieter on my server shelf :)




49 minutes ago, Trig0r said:

Yeah thats deffo a bit tight..


Not sure my OCD would be able to deal with those cables either :D


Well, the cables were hard to source locally. I didn't want to buy anything that wasn't in stock, since Covid meant huge delays on delivery, so I ended up with ones that were far too long. 30cm would have been enough; maybe in a future upgrade (when I add a sixth drive) I'll try and change the cables. Also still wondering how to fit the BBU card...


  • 6 months later...

Happy New Year everyone! I ordered some bits for the server in the post-Christmas sales, and it hasn't gone completely swimmingly. I ordered:


- 1x 960GB Seagate IronWolf 510 NVMe SSD

- CacheVault Module kit for the LSI 9361-8i

- 1x 4TB 2.5" Seagate Barracuda


The CacheVault kit fits fine and appears to be working normally. You don't need an additional card to hang the BBU itself on; just stick it on top of the case next to the top drive, and it'll fit snugly if you have it in its mounting case. Adding the drive to the RAID array pushed storage to 18TB usable. However, the SSD has been nothing but trouble: the NVMe port would reset whenever transfers got heavy (even from the USB 3.0 backup drive to the RAID array), crashing the system. DO NOT USE THIS DRIVE WITH THIS MOTHERBOARD, THEY ARE INCOMPATIBLE.
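With the CacheVault fitted, storcli can confirm the module is seen and healthy, and growing the array is driven from the same tool. The drive ID below is a placeholder, and the migrate syntax is per Broadcom's storcli reference, so verify against your card's documentation:

```shell
# Check CacheVault status (replaces the classic BBU on this kit)
storcli64 /c0/cv show all

# Example: grow the existing RAID5 by one drive
# (252:5 is a placeholder enclosure:slot; migration runs in the background)
storcli64 /c0/v0 start migrate type=raid5 option=add drives=252:5

# Watch migration progress
storcli64 /c0/v0 show migrate
```

The filesystem on top still needs to be grown separately once the migration completes.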


I've had it replaced, with the same result. I conclude that it's incompatible with my motherboard, so I've returned the replacement and ordered a 1TB Samsung PM981, which is the larger version of the original SSD. Hopefully this will run a lot better, and I can get back to troubleshooting why Nextcloud won't restore cleanly (I restore the PostgreSQL db, data and config, and when I try to log in it just errors out telling me to clear cookies... I think I'm going to have to recreate all the accounts from scratch with a fresh db instead and then add the data, since that just lives in its own dir anyway).
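One thing that may be worth trying before rebuilding the accounts (a suggestion from my side, not something tested in this build): Nextcloud ships an occ command specifically for telling clients the server was restored from backup, which often clears up odd post-restore login and sync behaviour:

```shell
# After restoring the database, data directory and config.php:
php occ maintenance:mode --on

# Mark the data as restored so clients re-sync cleanly
php occ maintenance:data-fingerprint

php occ maintenance:mode --off
```

Run from the Nextcloud directory as the web-server user.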


After a bit more than six months, the system has been stable, stayed cool and performed well. I haven't overwhelmed its capacity like I did the HP ProLiant Microserver's, so I'm very pleased so far, apart from the incompatibility with the SSD. Of course, my drives are all more or less 5400RPM, so slowish, but good enough for my needs.


Speaking of which, is there some kind of technological barrier preventing 2.5" drives from going past the 5TB mark? They just don't seem to be available as individual units. Is it because they're close to being overtaken by (much more expensive so far) SSDs?


