RESET Forums (homeservershow.com)

Leaderboard

  1. schoondoggy (Moderators) - Points: 162, Content Count: 8,964
  2. Dave (Administrators) - Points: 67, Content Count: 4,910
  3. Trig0r (Moderators) - Points: 47, Content Count: 1,416
  4. ShadowPeo (Moderators) - Points: 28, Content Count: 406

Popular Content

Showing content with the highest reputation since 08/12/2018 in Posts

  1. 5 points
    Ok, it's time to solve all the overheating problems. Stuff: 1x Noctua NF-A8-5V fan with USB power adaptor cable, 1x Akasa 3D Fan Guard AK-FG08-SL. https://imgur.com/a/7npqgdc Now the LSI 9271-4i temperature stays at approximately 57 degrees at most.
  2. 5 points
    As I do not plan to build any more of the SDM Rev4 brackets, I have decided to share the schematic. https://1drv.ms/u/s!Ao9ObX1BCpuXsstkeE7Jc5cyRmYTFQ?e=8ouDcP Edit: This is a Visio file. If you download it and open it in Visio you will see the scale of the drawing.
  3. 3 points
    Thanks for the reply. It turns out I’d bent a few pins on the CPU socket. God only knows how!! I managed to carefully bend the pins back and I’ve managed to boot up with RAM slots 1 and 2 populated. All seems to be running ok for now. Spot the bent pins in the pic 😬
  4. 3 points
    Are you apologetic because you are truly sorry or that you got caught? - Said every Mom on the planet.
  5. 3 points
    Thanks for the guidance on this! Just a note on the solution I settled on in the end, for anyone else who might come across this thread looking to do the same thing. I tried a couple of different cheaper eSATA cards with no success having the drives in the ICYCube detected in CentOS. Looked into driver issues, again with no luck. The USB connection did work fine, but I wasn't overly happy with that as a solution. I instead got this HBA (10Gtek Host Bus Adapter - https://www.amazon.co.uk/gp/product/B01M9GRAUM) with a Mini SAS connection, and swapped the ICYCube for an alternative external enclosure with a compatible Mini SAS connection (SilverStone SST-TS431S-V2 - https://www.amazon.co.uk/gp/product/B0771S45X3). On setup this worked perfectly, with the drives properly detected. I'm actually also a little happier with the SilverStone unit in general so far; it feels of higher build quality and the fan also seems a little quieter. I also decided to move away from using the B120i RAID controller, and hardware RAID in general, and am now using ZFS / RAIDZ. Thanks again for the help as I was working through this!
  6. 3 points
    It's not a server unless it sounds like a wind tunnel
  7. 3 points
    Did you enter the B120i configuration software, SSA or ACU, to check for it? If it is visible in the configuration software you would need to set it up as a single-drive RAID0. You could also turn the B120i off and run the onboard ports as AHCI, depending on what else you intend to do with the other ports. It is a good idea to unplug the cable from the motherboard and reconnect it. From the Amazon description it looks like you have the correct cable, SFF-8087 to SATA, which would be a fan-out cable.
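    Once the drive is configured either way (single-drive RAID0 or AHCI), a quick sanity check from within Windows is to list what the OS can actually see. A minimal PowerShell sketch using the built-in Storage cmdlets (my addition, not from the original post):

        # List every disk visible to the OS, with bus type and size, to confirm
        # the B120i logical drive (or AHCI-attached disk) actually shows up.
        Get-Disk | Format-Table Number, FriendlyName, BusType, @{n='SizeGB'; e={[math]::Round($_.Size/1GB)}} -AutoSize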
  8. 2 points
    A TDP of 69W is very high, even with the 65W replacement cooler from HP that costs a fortune, so I personally wouldn't use that CPU.
  9. 2 points
    It appears that should work, as it is unbuffered and ECC. The MS Gen8 does not require HP-branded memory.
  10. 2 points
    Sorted it. I created a UEFI boot USB and it worked.
  11. 2 points
    I have run four consumer-grade SSDs on it.
  12. 2 points
    Had some time on my hands. I've been running ESXi since I first got the MicroServer, upgrading until 6.5 update 2 with all the faff of downgrading drivers. I saw a path to migrate all my guests to Hyper-V with the Microsoft VM migration tool. So, long story short: the HP B120i RAID driver works with the Windows Server 2019 Hyper-V image. I was a bit concerned, as it is only listed for Server 2016. I installed the OpenSSH packages and Windows Admin Center, as I don't use Enterprise/Pro Windows at home. I even put on the Windows Subsystem for Linux so from the command line I can be in a more familiar shell; I'm not a PowerShell user for the most part.

    I shut down all guests and copied them off somewhere safe. The biggest VM disk was 1TB. I used pigz to speed up compression when I tarred them all up; the 1TB was actually mostly allocation, not data, which shrinks massively when compressed. The Linux subsystem was very handy to move everything back and expand it after I wiped the disks with the Windows install. All images needed connecting to the new virtual router and reverted to DHCP on the NIC, and had to be readdressed where I had statics. Windows Admin Center allowed RDP to the consoles for all of them to do this. I have one PC with Windows Home and Edge on it. I had a mix of FreeBSD, Linux, and one Windows image. Amazingly pain-free, faster, and solid. The only painful part was the time it took to convert all the virtual disks.

    I kind of wish I had done this sooner, but I think the advent of Windows Admin Center is what made it possible, as I can build a guest over a web interface just like ESXi from anywhere. I don't do any hardware passthru so I can't speak to this or complicated guests. If, like me, you're wondering about dumping ESXi for something else because you used the B120i softraid and are too cheap to buy a RAID card to replace it, I think this is the least worst solution. I was going to try Proxmox if I failed miserably, but it worked out fine. Found this guide helpful: https://www.nakivo.com/blog/how-to-convert-vmware-vm-to-hyper-v/
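    For anyone following the same path, the per-guest rebuild after disk conversion can also be scripted instead of clicked through in Windows Admin Center. A minimal Hyper-V PowerShell sketch; the switch name, adapter name, guest name, and VHDX path are placeholders, not details from the post above:

        # Create an external switch once, then register a converted guest on it.
        New-VMSwitch -Name "ExternalSwitch" -NetAdapterName "Ethernet"   # adapter name is an assumption
        New-VM -Name "web01" -MemoryStartupBytes 4GB -Generation 1 -VHDPath "D:\VMs\web01.vhdx" -SwitchName "ExternalSwitch"
        Start-VM -Name "web01"

    Guests restored this way still come up with DHCP on a fresh virtual NIC, so statics have to be reassigned from the console, exactly as described above.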
  13. 2 points
    I'd take the 1265; it has hyperthreading and runs cooler. A bit more useful info here on ARK: https://ark.intel.com/content/www/us/en/ark/compare.html?productIds=65734,65728
  14. 2 points
    It does bring many things into question. I have gone cheap on some things, others not so much. My DVD/BR rips, which make up probably 90-95% of the data usage, I do not want to have to replace, but I could; consequently I have a RAID 6 array, and the critical data is on RAID 1. As for losing a RAID on rebuild, the article below is old fear-mongering, but it gets the point across and explains the math behind what happens: https://www.zdnet.com/article/why-raid-5-stops-working-in-2009/ Obviously RAID 5 did not/does not stop working, but its efficacy as data protection becomes a greater and greater issue as storage devices increase in size.
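    A back-of-envelope version of that article's math, as a PowerShell sketch (my own illustration, assuming the commonly quoted consumer-drive rate of one unrecoverable read error per 1e14 bits, and a rebuild that must read 12TB from the surviving disks):

        # For a small per-bit error rate r and n bits read, P(no URE) = (1-r)^n ~ exp(-r*n).
        $ureRate  = 1e-14          # assumed URE rate: 1 per 1e14 bits read
        $bitsRead = 12e12 * 8      # assumed rebuild workload: 12 TB of reads
        $pUre     = 1 - [math]::Exp(-$ureRate * $bitsRead)
        "{0:P1} chance of at least one URE during rebuild" -f $pUre   # ~61.7%

    Bigger drives mean more bits read per rebuild, which is exactly why the risk keeps growing with capacity.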
  15. 2 points
    Bumping this topic :) I upgraded my MicroServer Gen8 to a Xeon 1270 v2 and decided to install active cooling. I tried to win a Scythe Kozuti on eBay, but oops. Then I read this topic and bought a Noctua NH-L9i. But mounting it on cable ties is not great, so I found a guy with a milling machine, and he made "short" legs for the L9i. Original Noctua on top, milled below. The cooler fits perfectly on the board using the Noctua screws with additional spacers from the tray side (hello HPE, with your non-standard 1155 dimensions). And don't forget to remove the nuts for the original heatsink from the motherboard tray. The cooler is connected to the case fan connector with a Y-splitter.
  16. 2 points
    Hi all, here's a guide I would like to share on Windows Storage Spaces and creating a 4-drive parity pool. In a nutshell, I have Windows Server 2019 and a Storage Spaces parity pool running very nicely on my Gen8. Here's the configuration I used and how to copy my setup. (I still believe ZFS or UnRAID are a far better choice as a filesystem on these limited servers, but if you need Windows like I do, then Storage Spaces can be an excellent alternative.) This is my "best effort" guide and by no means perfect. It does, however, yield excellent results for both read and write speeds.

    Hardware:
    - Gen8 MicroServer, 16GB RAM
    - CPU: stock for now (1270 V3 on its way)
    - Disks: 4x 3TB WD NAS drives in the front bays
    - SSD: Samsung Evo 850 250

    First lesson: DON'T use the Windows GUI to create the pool or virtual disk, as the GUI applies terrible defaults that you can't edit and will ruin performance. Also make sure you're on the latest version of Windows Server, as a LOT has changed and been improved recently. You must use PowerShell.

    Terms:
    - PhysicalDiskRedundancy - Parity
    - Columns - 4 (the data segments striped across the disks; should match your 4 disks)
    - Interleave - 256K (the amount of data written to each "column" or disk; in this case a 256KB interleave gives us a 64K write to each disk)
    - LogicalSectorSize - 4096
    - PhysicalSectorSize - 4096
    - ReFS/NTFS cluster - 64K

    Overall configuration: a 4-drive file system, plus one bootable SSD in RAID mode.

    BIOS setup, initial:
    - F9 into the BIOS and set the B120i controller into RAID mode
    - F5 into the RAID manager and create 1 individual RAID0 logical drive for the SSD
    - Set the SSD as the preferred boot drive (yes, in the same screen)
    - Set the cluster size to 63
    - Enable caching

    Windows install:
    - Install Windows Server 2019 Standard GUI edition from ISO
    - Offer up the B120i RAID drivers via a USB stick so the wizard can see the SSD RAID0 drive. Filename p033111.exe (have them extracted)
    - Windows update, patch, and reboot

    BIOS setup, post-Windows:
    - Once Windows is up and running, go back into the F5 RAID manager and finish the setup of the 4 front drives as 4x RAID0
    - Check the SSD is still set as the preferred boot drive (yes, in the same screen)
    - Set the cluster size to 63

    Windows config of Storage Spaces: at this point you should see 4 individual drives ready to be used as a storage pool. Try to set each disk to have a cache (not all drives support this):
    - Win + X to open the side menu
    - Device Manager
    - Expand Disk Drives
    - Right-click the "HP Logical Volume" entry for each drive
    - Check "Enable write caching on the device" (if it doesn't work, don't stress; it's optional but nice to have)

    PowerShell - run as admin. Determine the physical disks available for the pool we're about to create:

        Get-PhysicalDisk | ft friendlyname, uniqueid, mediatype, size -auto

    Your output will look something like this, so identify the 4 drives that are the same and take note of their uniqueid. Mine are the bottom four drives, all 3TB in size (the 250GB one is the SSD):

        friendlyname      uniqueid                         size
        ------------      --------                         ----
        HP LOGICAL VOLUME 600508B1001C5C7A1716CCDD5A706248  250023444480
        HP LOGICAL VOLUME 600508B1001CAC8AFB32EE6C88C5530D 3000559427584
        HP LOGICAL VOLUME 600508B1001C51F9E0FF399C742F83A6 3000559427584
        HP LOGICAL VOLUME 600508B1001C2FA8F3E8856A2BF094A0 3000559427584
        HP LOGICAL VOLUME 600508B1001CDBCE168F371E1E5AAA23 3000559427584

    Rename the friendly name based on the uniqueid from above and set the media type to HDD:

        Set-PhysicalDisk -uniqueid "Your UniqueID" -newFriendlyname Disk1 -mediatype HDD

    You will need to run that 4 times, once with each uniqueid, creating a new friendly name for each drive. I called mine "Disk1", "Disk2", etc.:

        Set-PhysicalDisk -uniqueid "600508B1001C2FA8F3E8856A2BF094A0" -newFriendlyname Disk1 -mediatype HDD
        Set-PhysicalDisk -uniqueid "600508B1001CDBCE168F371E1E5AAA23" -newFriendlyname Disk2 -mediatype HDD
        Set-PhysicalDisk -uniqueid "600508B1001CAC8AFB32EE6C88C5530D" -newFriendlyname Disk3 -mediatype HDD
        Set-PhysicalDisk -uniqueid "600508B1001C51F9E0FF399C742F83A6" -newFriendlyname Disk4 -mediatype HDD

    Verify the disks have been set correctly. The following shows which physical disks are available in the primordial pool and CAN be used in the new pool; you're just checking here that the friendly-name renaming worked and that they are all set to the HDD type. ("Primordial" just means on your local server and available.)

        Get-StoragePool -IsPrimordial $true | Get-PhysicalDisk | Where-Object CanPool -eq $True

    You should see your four drives with the nice names that you set, like "Disk1". Now find out your storage subsystem name, as we need it for the next command; just take note of it. It looks like "Windows Storage on <servername>"; mine is "Windows Storage on Radaxian".

        Get-StorageSubSystem

    The following creates a new storage pool named "Pool1" that uses all available disks and sets the logical sector size:

        New-StoragePool -FriendlyName Pool1 -StorageSubsystemFriendlyName "Windows Storage on Radaxian" -PhysicalDisks (Get-PhysicalDisk -CanPool $True) -LogicalSectorSizeDefault 64KB

    Now create the virtual disk on the new pool, with 4 columns and parity set correctly (this is critical to do via PowerShell):

        New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "VDisk1" -ResiliencySettingName Parity -NumberOfDataCopies 1 -NumberOfColumns 4 -ProvisioningType Fixed -Interleave 256KB -UseMaximumSize

    Those two commands should complete without error; if they don't, go back and check your syntax.

    Go back into the Windows GUI and open Server Manager\File and Storage Services\Servers. You should see the storage pool (Pool1) listed, along with the virtual disk (VDisk1) we created in the previous steps. Select Disks in the GUI, identify your new VDisk1, and right-click it. Set it to Online; this will also set it to use a GPT boot record. On the same screen, in the Volumes pane below, click TASKS and select "New Volume":
    - Select ReFS and a sector size of 64K
    - Enter a volume name like "Volume1" or whatever you want to call it
    - Select a drive letter such as Z
    (You can use NTFS here for slightly better performance, but I'm sticking with ReFS as it has some benefits.)

    You'll now have a storage pool, a virtual disk on top, and a volume created with optimal settings. Go back into PowerShell and enable power-protected status if applicable; just try it, no harm. (Ideally your server should be connected to a basic UPS to protect it from power outages.)

        Set-StoragePool -FriendlyName Pool1 -IsPowerProtected $True

    Check that the new sector sizes of the virtual disk and all relevant settings are correct:

        Get-VirtualDisk | ft FriendlyName, ResiliencySettingName, NumberOfColumns, Interleave, PhysicalDiskRedundancy, LogicalSectorSize, PhysicalSectorSize

    Example output:

        FriendlyName ResiliencySettingName NumberOfColumns Interleave PhysicalDiskRedundancy LogicalSectorSize PhysicalSectorSize
        VDisk1       Parity                4               262144     1                      4096              4096

    You're done... enjoy the new volume. At this point you can share out your new volume "Z" and allow client computers to connect.

    Some other PowerShell commands I found useful. Get more verbose disk details around sectors:

        Get-VirtualDisk -friendlyname Vdisk1 | fl
        Get-PhysicalDisk | select FriendlyName, Manufacturer, Model, PhysicalSectorSize, LogicalSectorSize | ft

    Check if TRIM is enabled (this output should be 0):

        fsutil behavior query DisableDeleteNotify

    If TRIM is not enabled, you can turn it on with these commands:

        fsutil behavior set disabledeletenotify ReFS 0
        fsutil behavior set disabledeletenotify NTFS 0

    Check the power-protected status and cache:

        Get-StorageAdvancedProperty -PhysicalDisk (Get-PhysicalDisk)[0]

    Once your data has been migrated back to your new pool from backup, make sure you run this command to "spread out the data" properly; it rebalances the Spaces allocation for all of the Spaces in the pool:

        Optimize-StoragePool -FriendlyName "Pool1"

    I'm yet to get my Xeon in the mail, but once that's installed I think the disk performance will go up even higher, as the stock CPU is junk.
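    If you'd rather stay in PowerShell for the volume step as well, the online/initialize/format sequence from the GUI can be scripted. A minimal sketch (my addition, not from the original guide), assuming the names used above ("VDisk1", "Volume1", drive letter Z):

        # Bring the new virtual disk online, partition it GPT, and format ReFS
        # with the 64K cluster size (65536 bytes) recommended in the guide.
        $disk = Get-VirtualDisk -FriendlyName "VDisk1" | Get-Disk
        Set-Disk -Number $disk.Number -IsOffline $false
        Initialize-Disk -Number $disk.Number -PartitionStyle GPT
        New-Partition -DiskNumber $disk.Number -DriveLetter Z -UseMaximumSize
        Format-Volume -DriveLetter Z -FileSystem ReFS -AllocationUnitSize 65536 -NewFileSystemLabel "Volume1"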
  17. 2 points
    Lots of data in this article: https://www.servethehome.com/surreptitiously-swapping-smr-into-hard-drives-must-end/
  18. 2 points
    They didn't say anything about resilvering... which puts any drive that's being added to an already-utilized RAID volume into a relentless write task that consequently puts a strain on any SMR drive. It doesn't matter if it's in a datacenter or a home NAS scenario... resilvering works the same way and puts a lot of stress on the drives in the volume. This is classic bait 'n' switch, WD. Not good.
  19. 2 points
    Red NAS Drives: SMR versus CMR. I've investigated this a bit further and came up with the following conclusions. Western Digital is not transparent with any of this information, so this is based just on what I found.

    1. The WDx0EFRX drives appear to be the older model. I purchased WD Reds in 2013 and they match the 2013 datasheet; ditto for some 3TB Reds I bought in 2016. As recently as the 2018 datasheet, WD listed WD40EFRX drives in their NAS datasheet. However, that was the first appearance of the WDx0EFAX drives, in 10 and 12 TB sizes.
    2. Their latest datasheet, published in December 2019, lists both WDx0EFRX and WDx0EFAX models for Reds, with interesting differences in cache and speed listed between the two without explanation.
    3. Amazon and others still have WDx0EFRX and WDx0EFAX drives listed separately. I purchased a "spare" WD Red over the weekend; it arrived today and is a WDx0EFRX model.
    4. QNAP has a hardware compatibility list. My NAS, a QNAP TS-451, does not list WDx0EFAX as a compatible drive. It does have WDx0EFRX spelled out.
    5. On the Synology compatibility list, the WD60EFAX and the WD20EFAX are listed as SMR drives.

    The following is not verified, but was mentioned in the QNAP and Synology forums: the WDx0EFAX drives may have been modified through cache to give SMR drives better compatibility with RAID. Here is a link to the datasheets I've found: https://drive.google.com/drive/folders/1EcjO5Pih7BilAshWhYcxbG6pFTwWWAOj?usp=sharing
  20. 2 points
    The spec sheet for the Dell XPS 8700 shows that the SATA ports are SATA 3.0. https://downloads.dell.com/manuals/all-products/esuprt_desktop/esuprt_xps_desktop/xps-8700_reference guide_en-us.pdf The 860 EVO is a very good drive. You may need a 2.5" to 3.5" drive adapter to mount it. https://downloads.dell.com/manuals/all-products/esuprt_desktop/esuprt_xps_desktop/xps-8700_owner's manual_en-us.pdf
  21. 2 points
    https://www.woot.com/plus/microsoft-surface-books-surface-pro-4-tablets?ref=w_cnt_gw_dly_wobtn
  22. 2 points
    Good overview: https://www.servethehome.com/hpe-proliant-microserver-gen10-plus-v-gen10-hardware-overview/
  23. 2 points
    iLO 2.73 released: https://support.hpe.com/hpsc/swd/public/detail?swItemId=MTX_ba3437a6c8d843f39ab5cace06

    UPGRADE REQUIREMENTS: OPTIONAL

    ***ATTENTION*** Note for ESXi users: If you are booted from the Embedded SD Card, it is strongly recommended that you reboot the server immediately after updating the iLO firmware.

    FIRMWARE DEPENDENCY: Hewlett Packard Enterprise recommends the following or greater versions of iLO utilities for best performance:
    - RESTful Interface Tool (iLOREST) 2.3
    - HPQLOCFG v5.2
    - Lights-Out XML Scripting Sample bundle 5.10.0
    - HPONCFG Windows 5.3.0
    - HPONCFG Linux 5.4.0
    - LOCFG v5.10.0
    - HPLOMIG 5.2.0

    KNOWN ISSUES:
    - Fibre Channel ports are displayed with degraded status if they are configured but not attached.

    FIXES: The following issues are resolved in this version:
    - Added fix for Embedded Remote Support in an IPv6-only environment.
    - Added fix for Embedded Remote Support data collection for systems with multiple Smart Array Controllers.

    ENHANCEMENTS:
    - Suppress SNMP traps for NIC link up/link down events that occur during POST.
  24. 2 points
    In theory it should work with a SATA M.2 disk, yes, as it uses a SATA controller. What I assume @schoondoggy was referring to is an HBA adapter that allows actual SATA disks to be added.
  25. 2 points
    I had the same problem. The solution that worked for me was to change the Power Regulator setting to "OS Control Mode" in iLO. Hope this helps.
  26. 2 points
    I want a video; then, when it all goes tits up, he can at least upload it to YouTube.
  27. 2 points
    I am interested in (old style):
    - motherboards/NICs for 10GbE (especially with AMD CPUs)
    - computer cases for lots of disks, say 12+
    … and anything in the 'Home and Family' topic series. Have a nice show! 🙂 PS: trying to update my next-door neighbour's 2009 laptop (Windows 7, HDD) to Windows 10 as I write.
  28. 2 points
    Use a VPN if possible; there are quite a few vulnerabilities in the RDP stack, so we have closed it for clients unless they specifically sign a waiver.
  29. 2 points
    Hey Dave, a few things that might help you and others. I haven't used the UDM yet (I'm waiting for the UDM Pro, which is still in final beta), but my understanding is that you can restore a CloudKey controller backup to the UDM's built-in CloudKey.

    Personally, in your configuration, I wouldn't physically reconfigure and move coax feeds and equipment. I would install the UDM in the basement, replacing the existing gear with just a simple cable swap. Sure, you're wasting the built-in AP, but everything else is much more straightforward. Theoretically, you should be able to restore your CloudKey backup and have almost the same network up and running in just a few minutes. Then you can start deconstructing or reconfiguring more at your leisure rather than out of the necessity of getting the network up and running for the entire household with no downtime 🙂

    On your review of your existing setup: IMHO, the primary benefit of Unifi, even more than the wide choice of physical AP units and mounting options, is the extensive configurability and monitoring/status options. You kinda touched on this towards the end of the podcast, but the ability to limit the radio power, turn off the auto settings, and assign the Wi-Fi channels (especially on the crowded 2.4 GHz frequency) to non-overlapped channel numbers is a big win for anyone trying to fix dead spots or avoid buying extra APs as a "brute force" solution to coverage. (Not that there is anything wrong with that; sometimes spending $100 on an extra AP instead of spending hundreds of dollars of time and effort to tweak is the right choice.)

    It wasn't clear that you are fully exploiting the Unifi flexibility to fix your Ring camera/doorbell problems. The first thing I usually do with a Unifi setup is to create a 2.4 GHz-only SSID and enable it only on the AP radio that is physically the right unit for the doorbells (or any IoT device that only supports 2.4 GHz) to connect to. Overriding the autoconnect/automatic behavior in Ring and other devices and forcing the connection to a specific AP solves almost all the Wi-Fi problems with these and similar devices that have somewhat dumb Wi-Fi firmware or less-than-ideal reliability. It's worth the trouble to re-program the SSID inside the Ring or other device, and the results are much better than just having multiple APs and hoping they are in range.

    I'm really curious whether the UDM will be successful in bringing Unifi to the general consumer market, but I'm skeptical it will really be able to displace Eero, Google, Orbi, and other true consumer gear. One irony is that right now the early adopters of the UDM are all sophisticated Unifi users, and that thing doesn't fit and looks awful in the otherwise beautiful rack porn photos they have been posting 🙂 Granted, the UDM is a lot cheaper than buying the equivalent individual parts, but there are advantages to being modular too: easier service, not losing everything if a non-critical module goes down, etc. There will always be a lively discussion between modular and integrated that goes all the way back to mainframes with terminals versus minicomputers and later PCs, so I'm not trying to re-ignite that long-standing debate, but merely pointing out that saving money isn't always the most significant reason to choose one over another. In the case of Unifi, both fans and users are primarily looking for new functionality. Personally, I would prefer to see some new capabilities made available, regardless of whether it is all-in-one or requires a new box. I can work around price and modularity issues, but I can't work around the lack of a critical feature.

    So, to bring this home: the only feature that the UDM provides that doesn't exist in the current gear is the new USG router/firewall. Specifically, the UDM is rated to handle 1 Gbps speeds with full hardware-speed packet analysis and intrusion processing. The current USG is only able to handle 100 Mbps and is severely taxed in performance at that speed. This is significant because consumer fiber and high-speed home Internet connections have zoomed from 3 Mbps to over 1 Gbps in many urban and metropolitan areas. Since you mentioned you don't have a USG in your current setup, I think you aren't in a good position to really appreciate the difference provided by the UDM versus the existing Unifi gear. I know some Unifi users prefer to use a separate router or the Ubiquiti EdgeRouter products because of these limitations, and thus don't have the integrated management provided by using the USG.

    On a positive note, the UDM finally removes the insecure PPTP VPN protocol, but it has not yet added support for OpenVPN for incoming VPN (to connect back to your home when you are away, or to use your home network as your own private VPN Internet gateway instead of a paid service), and that is a bit disappointing.
  30. 2 points
    I was looking for a YouTube video on QNAP's QVPN product, and right up there in the top search results was a Home Server Show legend: Mike Faucher, aka PCDoc. He has been relatively active recently on his YouTube channel and, from what I can see, has been posting useful and interesting content of interest to RESET users. Much like he was back in the halcyon days of the Home Server Show, he is our "Joe Friday": "Just the facts, ma'am." The QVPN video was exactly what I was looking for. Search for his name on YouTube; here is a link to his website: https://thedocsworld.net/
  31. 2 points
    Random thoughts on some of the Smart Array controllers. Used P430 and P440 controllers have dropped into the $100-$135 range. These controllers also appear to work well in HPE UEFI-based systems. For HPE servers with backplanes with female SFF-8087 connections, cabling options are fairly easy to come by. I have been playing with a P430 in the MicroServer Gen8 and Gen10. I had a female-to-female SFF-8087 connector to make the cabling work, but they are expensive at $60. The P440 is a good choice for the ML30 Gen9, as the server board supports the battery connection. I will share pictures as soon as the forum lets me upload.
  32. 2 points
    Windows download link: https://support.hpe.com/hpsc/swd/public/detail?swItemId=MTX_71b9ad7e388d434fb62f7542e3 Entitlement is *not* required, despite what the page says. To update without booting from the ISO, or from within the OS, you can extract the CPQJ0613.684 file from the exe with 7-Zip and upload it to the iLO firmware update page instead. A reboot is required for the new version to activate and be displayed.
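    For reference, a sketch of that extraction step from PowerShell, assuming 7-Zip's default install path; the installer filename here is a placeholder for whatever exe you downloaded:

        # Extract the firmware image from the downloaded installer into the current folder.
        & "C:\Program Files\7-Zip\7z.exe" e ".\firmware-installer.exe" CPQJ0613.684
        # CPQJ0613.684 can then be uploaded through the iLO web UI's firmware update page.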
  33. 2 points
    The problem was in fact the power management! PEte_79 is a god!
  34. 2 points
    Just in case anyone else has this problem, the solution was to install the latest Broadcom drivers. I used this one: https://www.dell.com/support/home/us/en/19/drivers/driversdetails?driverid=cn7mv&oscode=ws19l&productcode=poweredge-r430 and I am now down to around 2-3% CPU at idle. No idea why the Broadcom NX1 driver (cp031155) from the HPE site had such high CPU usage. -Jim
  35. 2 points
    I have an X3421 Gen10 MicroServer with the stock 8GB RAM running the Windows Server 2019 Eval. I did the standard install, then added Hyper-V and container support. This isn't part of a domain; nothing is set up beyond the initial install and normal Windows updates. The boot disk is an MX500 SSD attached to SATA5, with two 4TB 3.5" drives in the cage.

    I was seeing ~20% CPU utilization for the SYSTEM process and ~22% for SYSTEM INTERRUPTS. I tracked that down to the vEthernet device. When I uninstalled Hyper-V and removed the vEthernet device, it dropped down to ~12.5% CPU for SYSTEM and ~2.5% CPU for SYSTEM INTERRUPTS. Note that this is looking at the Task Manager performance tab. If I right-click on SYSTEM and select Go To Details, it shows SYSTEM taking about 6% CPU and SYSTEM INTERRUPTS taking about 2%. I don't know what causes this discrepancy.

    Running LatencyMon (for about a 30-second run) shows:
    - Highest measured time: 343
    - Highest reported ISR time: 33.8 (storport.sys)
    - Highest reported DPC time: 212.3 (ndis.sys)
    - Total hard page faults: 26

    I'm not sure how to track down the source of the SYSTEM process CPU usage; I don't see anything unusual in the event logs. Truthfully, I'm more concerned about the SYSTEM CPU usage, but both seem high to me for a server that is basically just sitting there. Is there something strange going on, or is this just the cost of running Server 2019 on a relatively low-performance CPU? This server is replacing Home Server 2011 on a MediaSmart EX495, and idle CPU on that was about 3-5%.

    I have installed BIOS ZA10A360, chipset driver WS2012R2_W8_1, AMD Chipset Graphics Driver 17.1.1, and the Broadcom NX1 driver (cp031155), all from the HP site. Not sure what else to try. Is this normal for Server 2019 with no load? Thanks in advance for any advice, -Jim
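    A quick check while chasing this kind of idle interrupt load is confirming exactly which NIC driver is active, since the fix above came down to the Broadcom driver. A small PowerShell sketch (my addition, not from the original posts) using the built-in NetAdapter module:

        # Show each NIC with the loaded driver's provider, version, and date,
        # to compare against the HPE cp031155 package or the newer Dell/Broadcom one.
        Get-NetAdapter | Format-Table Name, InterfaceDescription, DriverProvider, DriverVersion, DriverDate -AutoSize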
  36. 2 points
    Everyone has access to HPE support for drivers. BIOS downloads require a warranty or service contract. I think the Server 2016 driver will work: https://support.hpe.com/hpsc/swd/public/detail?swItemId=MTX_4360dc484acb4f8badf3d9ea42
  37. 2 points
  38. 2 points
    New rule in place: Members with less than 10 posts may not place links in posts. These posts will go to moderation.
  39. 2 points
    Mirror the files on your Mac to your server and save the $9.99. I am being charged $9.99 for my server, and it is not so horrible that I will start fresh elsewhere. I have 2+ TB at CrashPlan, and brief research shows that it won't be cheaper elsewhere. I agree that it is sad that the $2.49/month honeymoon period is over.
  40. 2 points
    My server rack in the basement is a hot mess. I decided to fix it up a bit and am splitting the work into a few phases. I just finished phase one and thought I would show you.

    Here is the problem: a rack of wiring installed by someone else. Coax and voice wiring are in a sub-rack to the side. I've "borrowed" a few of the voice CAT5e lines to use as Ethernet. My first plan was to keep the sub-rack and run a cable ladder back and forth, but I decided to run a few jumpers up the wall and come in behind the rack instead. That allows me to dump the side rack and leave it as it was when I first saw it. You can see in the photos that I also needed a wire rack on the side to hold some gear. I used to have a pfSense box and some mining rigs as well. The goal is to get it all in the 19" rack. One thing that makes it difficult is I have so many little boxes:
    - HDHomeRun
    - Ooma VOIP box
    - Echo Connect (hooks the VOIP line into the Alexas)
    - Fire TV Recast
    - Netgear cable modem
    - Synology RT2600ac router
    - DS415+
    - DS218+
    Then I have the D-Link 24-port switch, and somewhere around here I need to put in a PoE switch for the Ubiquiti APs so they will all be on battery backup. All these little boxes have to go somewhere.

    Here is the side rack. At this point the cable coming in is jumpered to my great room, where the Synology router and cable modem are. I did this in order to use the Synology as the wireless AP. This is also the panel I steal CAT5e connections to certain walls from. The voice system for the house is here too; I have to inject the Ooma box into the house so we can use normal handsets and an answering machine.

    Here is the hot mess: speaker wiring mixed with CAT5e. The amp on the bottom hooks up to an Echo Dot to do whole-home audio with Alexa. The UPS is on the wire rack, as well as the NASes. The HDHR is at the top, which is out of frame.

    Here is phase one complete. I still need to power it all down and get the UPS moved, and the HDHR as well. Side rack: I ran an extra CAT5e jumper and it's curled up in front. I may have to rethink this, as the jumper on the top right is bouncing from gig to 100mbps. I may have to pull these jumpers all the way back into the main rack and re-punch them down; my concern would be them reaching.

    Here is the new setup. On the top are all the odd-shaped boxes: Connect, Recast, Ooma; below are the Netgear cable modem and Synology router (more on this below). Wiring section: rooms on the left, boxes on the right. Speaker wiring. NAS section.

    Phase 2 will be powering it all back down and fixing up some wiring in the back: power on one side, data runs on the other. Move the UPS, move the HDHR. Also, I may move from the Synology router to Untangle. If so, I'm going to rack-mount a PC and use the Synology router as a spare.

    I have the Synology wireless on, but it's completely surrounded by metal. Not far from this rack are two heater/AC units and tons of ducting on all sides. That's why Wi-Fi access points in this room are a waste; the only clients are in this actual room. I have wireless on right now and will see if it works OK with the Ubiquiti system I have on the floors above it.

    It's not a supermodel, but I'm liking it so much better! Suggestions and comments welcome. I used to have a MacBook set up at all times with Ethernet; that way, when I came downstairs, I had something to do some console work with. I may set that back up at some point.
  41. 2 points
    Depending on the area I am trying to cover, mounting a WAP high on a wall can sometimes be easier than ceiling mounting, both for mounting a box and for running cable.
  42. 2 points
    This is the latest SPP to download. The service packs no longer contain Gen8 updates, so HP releases these "post-production service packs." Funnily enough, the 8.1 pack freezes on "analyzing system" when it runs on both my MicroServer Gen8 and DL360p Gen8, so I've updated manually in Windows. http://h17007.www1.hpe.com/us/en/enterprise/servers/products/service_pack/spp/index.aspx?version=Gen8.1 Use this link, click on the appropriate OS you have, and then sort by date: https://support.hpe.com/hpesc/public/home/driverHome?sp4ts.oid=5390291

    My own notes have my server at the following, running Windows Server 2019 Standard. I don't have any "entitlement" but I've been able to download all these updates.

    Firmware:
    - BIOS: J06 (May 21, 2018)
    - iLO: v2.61 (July 27, 2018)
    - Intelligent Provisioning: v1.64.0.0 (March 2, 2017)
    - RAID Controller HP B120i: v3.54.0
    - RAID Controller HP P222: v8.32c (Nov 27, 2018)
    - SATA Controller: v0.84

    Drivers:
    - Matrox G200eH Video: v9.15.218 (Nov 27, 2018)
    - SAS/SATA Event Notification Service: v6.46.0.64 (Oct 24, 2016)
    - iLO4 Channel Interface Driver: v3.31.0.0 (Jun 26, 2018)
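    When keeping an inventory like this, the installed system ROM version and date can be read straight from Windows; a one-line PowerShell sketch using the standard Win32_BIOS class (the other firmware items above still have to be checked via iLO or SSA):

        # Report the running BIOS/system ROM version and its release date.
        Get-CimInstance Win32_BIOS | Format-List SMBIOSBIOSVersion, Manufacturer, ReleaseDate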
  43. 2 points
    LOL, I know I am digging old dirt here but I am in the exact same situation as the OP. My V1 server has been chugging along for over 10 years total now. Seven years since I rebuilt it on new hardware. Just got a 918+ from Adorama this week. Working on it now. The only thing I am likely not migrating to the synology is my weather station. I think I will put that on a pi.
  44. 2 points
    Probably a historical relic? UniFi doesn't use Java at all. It isn't designed as a standalone, a la carte system (buying just one or a few APs). The key architectural benefit of UniFi is that it uses an Enterprise (some may say "Enterprise-like") centralized management model with an integrated web console to manage all their devices. From a "single pane of glass" you can manage UniFi switches, access points, the Security Gateway/router, etc. Although individual products come close to "best of breed" on their own merits (and most exceed typical consumer products like Eero, Orbi, etc.), I wouldn't suggest anyone try UniFi unless they are interested in a unified view of their network and/or plan on using at least some of their gear beyond simple APs.

    Honestly, the systems to compare it to are Cisco, Open Mesh, Ruckus, Meraki, etc. In that league of product, UniFi is the only one that is widely available to consumers, doesn't require restrictive dealer training/authorization, and can be purchased one piece at a time. In the consumer space, it is really suitable for the power user / hacker / expert that hates hardware that hides or limits features, considers the user "stupid" and in need of protection from themselves, etc. Whereas with Cisco or traditional gear you are going to have to learn a cryptic, proprietary command-line language and an entire philosophy of operation to configure and get the most out of the gear, UniFi can be completely installed and operated via very straightforward mobile apps or a nice full-size web browser. Uber-users can SSH to any UniFi gear and have direct command-line access to a real Unix/Linux kernel; if you know what you are doing, you can install your own modules and reconfigure anything, even things not exposed in the regular GUI web interface.

    Having said that, many of their products can be used as "dumb" hardware devices if you don't actually configure them (e.g. they have a nice line of 8-port PoE smart Ethernet switches, but if you just plug them in and use them, they work fine as "dumb" switches). In particular, the APs by themselves can be set up in "standalone mode" with a few clicks of their smartphone app. You don't get all the benefits of the centralized management system, but you can do it (I say don't; stick with Eero or other things if you aren't going to use the real capability of UniFi).
  45. 2 points
    I need to pack away the 2.5" drives, but I do have 10 drives running in the MS Gen10. I have Windows 10 Pro running on an M.2 Crucial SATA SSD, and a Toshiba M.2 NVMe PCIe SSD mounted on the top PCIe card in the picture below. The bottom card is a Marvell-based 4-port SATA PCIe x1 card that is running four 2.5" drives. When buying cards for a MS Gen10, be sure they are not taller than the bracket. Here are the cards I used: https://www.amazon.com/gp/product/B07BNWFFNK/ref=oh_aui_detailpage_o00_s00?ie=UTF8&psc=1 https://www.amazon.com/gp/product/B01464550K/ref=oh_aui_detailpage_o00_s01?ie=UTF8&psc=1
  46. 2 points
    "How did you get em all packed in there?" Still working on it. I could stack two on a side bracket, but I want to leave that open for a 15mm 2.5" drive. Now that the 2TB Seagates come in 7mm 2.5", I think I can fit four in the ODD bay with two stacks of two drives. Cabling is always an issue.
  47. 2 points
    Hey, I know that guy.
  48. 2 points
    Sorry, I should have asked for a picture sooner. That module is registered memory; the chip in the middle is the register/latch chip. The part number, OWC1333D3ECC8GB, is a UDIMM, but the DIMM they sent you is an RDIMM. They have mislabeled the part.