RESET Forums (homeservershow.com)

Leaderboard

Popular Content

Showing content with the highest reputation since 07/03/2021 in all areas

  1. The host device has to have an eSATAp port to supply power to the client device; regular eSATA doesn't carry power. I don't have the specs for the N40L, but eSATAp ports are uncommon, even back when the N40L was new. I still have an external USB drive enclosure with an eSATA connector. Works great, but USB 3 devices have largely obsoleted eSATA.
    2 points
  2. Just reporting back to say that I've successfully quietened my ML350p thanks to the advice given here. I decided to remove the whole fan cage and bridge the relevant connections on the little PCB rather than fiddling with the fan connectors. I didn't bother adding a grounding strap to the chassis like XeonUnicorn, but might do that later on. As others have mentioned, I had to remove the CPU retaining clamps to get the coolers to sit flush. This probably isn't ideal as the coolers are now the only thing holding the CPUs in place, but as this is just a homelab project I can live with it. For the time being, I'm running the fans from a cheap fan splitter board that takes SATA power. At some point I plan on adding some more case fans and an Arduino-based fan controller, possibly afancontrol (https://afancontrol.readthedocs.io). I ran the server with the lid off for a little bit while testing a GPU and managed to trigger a thermal shutdown due to the HD controller hitting 100°C. It's OK with the lid on, however, suggesting the Noctua coolers do provide adequate airflow through the chassis. After replacing the CPU coolers, I realised that the power supply fans actually make quite a bit of noise too, so I replaced those with Noctua fans as well, and now I finally have the server at an acceptable noise level. The colour coding on my PSU fans was different to that mentioned by ook4mi earlier in this thread; I checked the pinout with an oscilloscope and the tachometer wire was white on mine, IIRC.
    1 point
  3. It's mainly a media server, so it doesn't matter what drive the files are on; Plex knows where they are. This way only the drive in use gets spun up instead of a whole array for a single file, e.g. to save power. All closed up, the fans are normally at something like 11%, but it's 32°C here atm so the fan is at 40%. The drives are a tad warm but they should spin down soon; I rebooted it about 40 mins ago and it's on a 60 min spindown window. Edit: actually the hottest drives are in the SAS enclosure.
    1 point
  4. Upgraded a few drives today on the primary Gen8 Microserver.

     Microserver Gen8 (Primary)
     Xeon E3-1240 v2
     16 GB PC3-14900 (Crucial CT2KIT102472BA186D)
     Intel PCH Ports
     Port 1: 250GB Samsung 850 Evo 2.5"
     Port 2: 10TB WDC WD100EZAZ 3.5"
     Port 3: 10TB WDC WD100EZAZ 3.5"
     Port 4: 8TB WDC WD80EZAZ 3.5"
     Port 5: 8TB WDC WD80EZAZ 3.5"
     LSI SAS 9207-4i4e (PCIe 3.0 x8)
     Internal 1: 5TB Seagate ST5000LM000 2.5" 15mm
     Internal 2: 5TB Seagate ST4000LM000 2.5" 15mm
     Internal 3: 4TB Seagate ST4000LM016 2.5" 15mm
     Internal 4: 4TB Seagate ST4000LM024 2.5" 15mm
     External 5: 8TB WDC WD80EZZX 3.5" (SAS enclosure)
     External 6: 8TB WDC WD80EZZX 3.5" (SAS enclosure)
     External 7: 8TB WDC WD80EZZX 3.5" (SAS enclosure)
     External 8: 8TB WDC WD80EZZX 3.5" (SAS enclosure)
     10 Port USB 3.0 Hub (USB Attached SCSI)
     Port 1: 2TB Seagate Expansion Portable (Samsung M9T)
     Port 2: 2TB Seagate Expansion Portable (Samsung M9T)
     Port 3: 2TB Seagate Expansion Portable (Samsung M9T)
     Port 4: 2TB Seagate Backup+ (Samsung M9T)
     Port 5: 2TB Seagate Backup+ (Samsung M9T)
     Port 6: 2TB Seagate Backup+ (Samsung M9T)
     Internal Headers
     Samsung UHS-1 Micro SD 128GB (onboard Micro SD reader)
     98.3TB
    1 point
  5. Hello all, I've also recently purchased an ML350p. My background is electrical industrial instrumentation and control; not really a PC guy... yet. I'm working on it, lol. My first job was getting my GPU to work. I struggled to find consensus or a well explained pinout with regards to using consumer GPU cards in these machines, and I couldn't find the OEM cable anywhere, so it took me some time to get it working. My issue turned out to be power related. So, for other noobs getting scared off doing this like I almost was, here's what I have confirmed, using a Fluke 789.

     Normal PCIe 8-pin pinout (for reference):
     Pins 1, 2, 3: +12V
     Pins 5, 7, 8: 0V (GND)
     Pin 4 (sense on the 8-pin plug): 0V (GND)
     Pin 6 (sense on the 6-pin plug): 0V (GND)

     HP PCIe pinout (what I found, compared to the above using the same pin numbers):
     Pins 1, 2, 3, 4: +12V
     Pins 5, 7, 8: 0V (GND)
     Pin 6 (sense): 0V (GND)

     I have found references on reddit and other forums indicating that Pin 3 is unused and its position is shifted to Pin 4. My tests show it at 12V; I'm not sure if there are some smarts in the machine that can switch this pin, or maybe mine is not standard? Consensus is that Pins 1-4 should be at 12V and Pins 5-8 at 0V or GND. I can confirm that my 6GB GTX Titan works with this mod. I modified a standard PCIe 8-pin to 8+6 cable, leaving the GPU side as per normal and changing the mobo side to what HP expects. I had to make a custom cable since mine was not long enough, so I only ended up using the plugs really; I tried to remove as much of the original cable as possible, as it seemed like sh** cable. Since I work in the industrial sector I used what I had access to, which was 1mm marine grade flex: https://www.firstflex.co.nz/product/mst001-0/ I made it 450mm long and it was a comfortable fit. If I were to make it again, I would increase the length to 500mm for looming purposes.

     Next I will be working on fans. The thing sounds like an aircraft; my wife was suitably unimpressed when I brought it inside. I would like to make use of the original connectors, fan cage etc. if I can, and modify as little as possible. Maybe make a custom PCB to invert the PWM and add some current limiting for ordinary fans. If I got into full mod mode, I have an ABB Commander 350 in the garage somewhere; might look cool mounted inside the unused drive bay. I have also started soldering up a custom water-block from some busbar plate I have around the place; I'll use Swagelok fittings and 1/2" Teflon tube. MPa rated, so should be fine for this pressure range, ha ha. Still not figured out how to mount everything yet. Got a million other projects on, so we'll see how long it takes me to get round to finishing this one. StretchNZ, if you're about mate, would be nice to chat about your machine. I live in Auckland. -Jim
    1 point
  6. Looks good, but after the last three years of ransomware and other security issues, I would be very leery of trusting them for a router. Don't get me wrong, I love QNAP hardware — best value for an Intel-based NAS, and it works solidly as a file and media server. But they tried to open up the devices and did a half-assed job, especially for us noobs in the home market. Also, it only supports 2x 2.5" drives, which is somewhat low by the standard build on this forum. That's what they look like: baby blue soft matte finish.
    1 point
  7. Thanks for sharing. Boy, talk about memories... I miss all those guys — Mike, Jim, John — and the format of the old "Home Server Show". Good times indeed...
    1 point
  8. Great video on QuFirewall from our old friend Mike Faucher; it may be useful on other platforms as well, i.e. why you would enable a firewall on your NAS. Good description of what it is and how to set it up: https://youtu.be/CydCrgw7TGM I met Mike Faucher at a couple of the legendary HomeServerShow meetups; I think his demos caused me to go with a QNAP NAS. Good guy. He has dry, but concise, clear videos, with lots of content on QNAP. I was very disappointed in QuFirewall, but it's clear that part of the problem was how I set it up; still, lousy documentation and a poor launch by QNAP. Will try it again after it's updated this month.
    1 point
  9. Looking at the connector, you should be able to use a regular SATA cable to patch your connector to a drive. You will have to power the drive separately.
    1 point
  10. Surprised you haven't already tried to sneak one past the FD, to be honest. You must be slipping in your old age.
    1 point
  11. Hi all, here's a guide I would like to share on Windows Storage Spaces and creating a 4-drive parity pool. In a nutshell, I have Windows Server 2019 and a Storage Spaces parity pool running very nicely on my Gen8. Here's the configuration I used and how to copy my setup. (I still believe ZFS or unRAID are a far better choice as a filesystem on these limited servers, but if you need Windows like I do, then Storage Spaces can be an excellent alternative.) This is my "best effort" guide and by no means perfect. It does, however, yield excellent results for both read and write speeds.

     Gen8 Microserver
     16GB RAM
     CPU: stock for now (1270 V3 on its way)
     Disks: 4x 3TB WD NAS drives in the front bays
     SSD: Samsung 850 Evo 250GB

     First lesson: DON'T use the Windows GUI to create the pool or virtual disk, as the GUI applies terrible defaults that you can't edit and will ruin performance. You must use PowerShell. Also make sure you're on the latest version of Windows Server, as a LOT has changed and been improved recently.

     Terms:
     PhysicalDiskRedundancy: Parity
     Columns: 4 (the data segments striped to disks; should match your 4 disks)
     Interleave: 256K (the amount of data written across the "columns"; 256KB over 4 columns works out to a 64K write to each disk, matching the 64K cluster size below)
     LogicalSectorSize: 4096
     PhysicalSectorSize: 4096
     ReFS/NTFS cluster: 64K

     Overall configuration: 4-drive file system, plus one bootable SSD in RAID mode.

     BIOS setup (initial):
     F9 into the BIOS and set the B120i controller to RAID mode
     F5 into the RAID manager and create 1 individual RAID0 logical drive for the SSD
     Set the SSD as the preferred boot drive (yes, in the same screen)
     Set the cluster size to 63
     Enable caching

     Windows install:
     Install Windows Server 2019 Standard GUI edition from ISO
     Offer up the B120i RAID drivers via a USB stick so the wizard can see the SSD RAID0 drive.
     Filename: p033111.exe (have the drivers extracted)
     Windows update, patch and reboot

     BIOS setup (post Windows):
     Once Windows is up and running, go back into the F5 RAID manager and finish the setup of the 4 front drives as 4x individual RAID0 logical drives
     Check the SSD is still set as the preferred boot drive (yes, in the same screen)
     Set the cluster size to 63

     Windows config of Storage Spaces:
     At this point you should see 4 individual drives ready to be used as a storage pool.

     Try to set each disk to have a cache (not all drives support this):
     Win + X to open the side menu
     Device Manager
     Expand Disk Drives
     Right click the "HP Logical Volume" for each drive
     Check "Enable write caching on the device" (if it doesn't work, don't stress; it's optional but nice to have)

     PowerShell (run as Admin):

     Determine the physical disks available for the pool we're about to create:

     Get-PhysicalDisk | ft friendlyname, uniqueid, mediatype, size -auto

     Your output will look something like this; identify the 4 drives that are the same and take note of their uniqueid. Mine are the bottom four drives, all 3TB in size:

     friendlyname          uniqueid                          size
     ------------          --------                          ----
     SSD HP LOGICAL VOLUME 600508B1001C5C7A1716CCDD5A706248  250023444480
     HP LOGICAL VOLUME     600508B1001CAC8AFB32EE6C88C5530D  3000559427584
     HP LOGICAL VOLUME     600508B1001C51F9E0FF399C742F83A6  3000559427584
     HP LOGICAL VOLUME     600508B1001C2FA8F3E8856A2BF094A0  3000559427584
     HP LOGICAL VOLUME     600508B1001CDBCE168F371E1E5AAA23  3000559427584

     Rename each disk's friendly name based on its uniqueid from above, and set the media type to HDD:

     Set-PhysicalDisk -uniqueid "Your UniqueID" -newFriendlyname Disk1 -mediatype HDD

     You will need to run that 4 times, once per uniqueid, creating a new friendly name for each drive. I called mine "Disk1", "Disk2" etc:

     Set-Physicaldisk -uniqueid "600508B1001C2FA8F3E8856A2BF094A0" -newFriendlyname Disk1 -mediatype HDD
     Set-Physicaldisk -uniqueid "600508B1001CDBCE168F371E1E5AAA23" -newFriendlyname Disk2 -mediatype HDD
     Set-Physicaldisk -uniqueid "600508B1001CAC8AFB32EE6C88C5530D" -newFriendlyname Disk3 -mediatype HDD
     Set-Physicaldisk -uniqueid "600508B1001C51F9E0FF399C742F83A6" -newFriendlyname Disk4 -mediatype HDD

     Verify the disks have been set correctly. The following shows which physical disks are primordial ("primordial" just means local to your server and available) and CAN be used in the new pool; you're checking that the renaming worked and that they are all set to the HDD media type:

     Get-StoragePool -IsPrimordial $true | Get-PhysicalDisk | Where-Object CanPool -eq $True

     You should see your four drives with the nice names you set, like "Disk1".

     Now find out your storage subsystem name, as we need it for the next command; just take note of it. It looks like "Windows Storage on <servername>"; mine is "Windows Storage on Radaxian":

     Get-StorageSubSystem

     The following creates a new storage pool named "Pool1" that uses all available disks and sets the default sector size:

     New-StoragePool -FriendlyName Pool1 -StorageSubsystemFriendlyName "Windows Storage on Radaxian" -PhysicalDisks (Get-PhysicalDisk -CanPool $True) -LogicalSectorSizeDefault 64KB
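     (An optional sanity check before the next step — this isn't one of the original steps, just a minimal sketch using the names from above; the columns shown are only a suggestion. All four drives should be listed as HDD with the friendly names you set, and the SSD should not appear:)

     # List only the members of the new pool to confirm the right disks joined
     Get-StoragePool -FriendlyName Pool1 | Get-PhysicalDisk | ft FriendlyName, MediaType, Size -auto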
     Now create the virtual disk on the new pool, with the 4 columns and parity set correctly (this is critical to do via PowerShell):

     New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "VDisk1" -ResiliencySettingName Parity -NumberOfDataCopies 1 -NumberOfColumns 4 -ProvisioningType Fixed -Interleave 256KB -UseMaximumSize

     Both the pool and virtual disk commands should complete without error; if they don't, go back and check your syntax.

     Go back into the Windows GUI and open Server Manager\File and Storage Services\Servers. You should see the storage pool listed along with the virtual disk created in the previous steps:
     Storage pool: Pool1
     Virtual disk: VDisk1

     Select Disks in the GUI, identify your new VDisk1 and right click it. Set it to Online; this will also set it to use a GPT boot record.

     On the same screen, in the Volumes pane below, click TASKS and select "New Volume":
     Select ReFS and a sector size of 64K
     Enter a volume name like "Volume1" or whatever you want to call it
     Select a drive letter such as Z
     (You can use NTFS here for slightly better performance, but I'm sticking with ReFS as it has some benefits. If you'd rather do this volume step in PowerShell too, see the sketch at the end of this post.)

     You'll now have a storage pool, a virtual disk on top of it, and a volume created with optimal settings.

     Go back into PowerShell and enable power-protected status if applicable; just try it, no harm. (Ideally your server should be connected to a basic UPS to protect it from power outages.)

     Set-StoragePool -FriendlyName Pool1 -IsPowerProtected $True

     Check that the sector sizes of the new virtual disk and all relevant settings are correct:

     Get-VirtualDisk | ft FriendlyName, ResiliencySettingName, NumberOfColumns, Interleave, PhysicalDiskRedundancy, LogicalSectorSize, PhysicalSectorSize

     Example output:

     FriendlyName ResiliencySettingName NumberOfColumns Interleave PhysicalDiskRedundancy LogicalSectorSize PhysicalSectorSize
     VDisk1       Parity                4               262144     1                      4096              4096

     You're done... enjoy the new volume. At this point you can share out your new volume "Z" and allow client computers to connect.

     Some other PowerShell commands I found useful:

     Get more verbose disk details around sectors:
     Get-VirtualDisk -friendlyname Vdisk1 | fl
     Get-PhysicalDisk | select FriendlyName, Manufacturer, Model, PhysicalSectorSize, LogicalSectorSize | ft

     Check if TRIM is enabled (the output should be 0):
     fsutil behavior query DisableDeleteNotify

     If TRIM is not enabled, you can turn it on with these commands:
     fsutil behavior set disabledeletenotify ReFS 0
     fsutil behavior set disabledeletenotify NTFS 0

     Check the power-protected status and cache:
     Get-StorageAdvancedProperty -PhysicalDisk (Get-PhysicalDisk)[0]

     Once your data has been migrated back to your new pool from backup, make sure you run this command to "spread out the data" properly; it rebalances the Spaces allocation for all of the Spaces in the pool:

     Optimize-StoragePool -FriendlyName "Pool1"

     I'm yet to get my Xeon in the mail, but once that's installed I think the disk performance will go up even higher, as the stock CPU is junk.
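     (PowerShell alternative for the volume step — a minimal sketch rather than a tested recipe. It assumes the VDisk1 name from the guide; the drive letter Z, label "Volume1" and 64K allocation unit simply mirror the GUI choices above:)

     # Find the Windows disk backing the new virtual disk
     $disk = Get-VirtualDisk -FriendlyName "VDisk1" | Get-Disk
     # Bring it online (equivalent to the GUI "Online" step) and initialise as GPT
     Set-Disk -Number $disk.Number -IsOffline $false
     Initialize-Disk -Number $disk.Number -PartitionStyle GPT
     # One max-size partition, formatted ReFS with 64K clusters (65536 bytes = 64K)
     New-Partition -DiskNumber $disk.Number -DriveLetter Z -UseMaximumSize |
         Format-Volume -FileSystem ReFS -AllocationUnitSize 65536 -NewFileSystemLabel "Volume1"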
    1 point
  12. I had someone PM me earlier asking how my Microservers are set up, so I decided to update this with the current layout. There are a pile of images here from over the years: https://postimg.cc/gallery/kmd2PrK

     Microserver Gen8 (Primary)
     Xeon E3-1240 v2
     16 GB PC3-14900 (Crucial CT2KIT102472BA186D)
     Intel PCH Ports
     Port 1: 250GB Samsung 850 Evo 2.5"
     Port 2: 10TB WDC WD100EZAZ 3.5"
     Port 3: 10TB WDC WD100EZAZ 3.5"
     Port 4: 8TB WDC WD80EZAZ 3.5"
     Port 5: 8TB WDC WD80EZAZ 3.5"
     LSI SAS 9207-4i4e (PCIe 3.0 x8)
     Internal 1: 4TB Seagate ST4000LM016 2.5" 15mm
     Internal 2: 4TB Seagate ST4000LM024 2.5" 15mm
     Internal 3: 2TB Samsung M9T 2.5"
     Internal 4: 2TB Samsung M9T 2.5"
     External 5: 8TB WDC WD80EZZX 3.5" (SAS enclosure)
     External 6: 8TB WDC WD80EZZX 3.5" (SAS enclosure)
     External 7: 8TB WDC WD80EZZX 3.5" (SAS enclosure)
     External 8: 8TB WDC WD80EZZX 3.5" (SAS enclosure)
     10 Port USB 3.0 Hub (USB Attached SCSI)
     Port 1: 2TB Seagate Expansion Portable (Samsung M9T)
     Port 2: 2TB Seagate Expansion Portable (Samsung M9T)
     Port 3: 2TB Seagate Expansion Portable (Samsung M9T)
     Port 4: 2TB Seagate Backup+ (Samsung M9T)
     Port 5: 640GB Seagate Expansion Portable (HGST 5K750)
     Port 6: 320GB Seagate Backup+ Portable (HGST 5K320)
     Internal Headers
     Samsung G2 Portable deshelled 320GB (Samsung HM321HX) (internal USB header)
     Samsung UHS-1 Micro SD 128GB (onboard Micro SD reader)
     89.6TB

     Microserver Gen8 (Backup)
     Xeon E3-1265L v2
     16 GB RAM PC3-1600
     Intel PCH Ports
     Port 1: 240GB Kingston V300 2.5"
     Port 2: 6TB WDC WD60EZRX 3.5"
     Port 3: 6TB WDC WD50EZRX 3.5"
     Port 4: 5TB Toshiba MD04ACA500 3.5"
     Port 5: 5TB Toshiba MD04ACA500 3.5"
     LSI SAS 9207-4i4e (PCIe 3.0 x8)
     Internal 1: 320GB Seagate ST320LT007
     Internal 2: 320GB Toshiba MK3265GSX H
     Internal 3: 320GB Toshiba MK3265GSX H
     Internal 4: Empty
     External 5: 5TB Toshiba MD04ACA500 3.5" (SAS enclosure)
     External 6: 5TB WD50EZRZ 3.5" (SAS enclosure)
     External 7: 3TB Seagate ST3000DM001 3.5" (SAS enclosure)
     External 8: 3TB Seagate ST3000DM001 3.5" (SAS enclosure)
     USB 3.0 Rear Header
     DLSIN DLA614SUSJ3 4 Bay Enclosure (USB Attached SCSI)
     Port 1: 2TB WDC WD20EADS 3.5"
     Port 2: 2TB Samsung HD204UI 3.5"
     Port 3: 1.5TB HD154UI 3.5"
     Port 4: 750GB WDC WD750AAKS 3.5"
     10 Port USB 3.0 Hub (USB Attached SCSI): Empty
     Internal Headers
     Empty (internal USB header)
     Samsung UHS-1 Micro SD 64GB (onboard Micro SD reader)
     45.6TB

     https://i.postimg.cc/XNzxdVkz/Sas.jpg
    1 point