RESET Forums (homeservershow.com)


Popular Content

Showing content with the highest reputation since 07/15/2019 in all areas

  1. 5 points
    Ok, it's time to solve all problems with overheating. Parts: 1x Noctua NF-A8-5V fan with a USB power adaptor cable, 1x Akasa 3D Fan Guard AK-FG08-SL. https://imgur.com/a/7npqgdc Now the LSI 9271-4i temperature stays no higher than about 57 degrees.
  2. 5 points
    As I do not plan to build any more of the SDM Rev4 brackets, I have decided to share the schematic. https://1drv.ms/u/s!Ao9ObX1BCpuXsstkeE7Jc5cyRmYTFQ?e=8ouDcP Edit: this is a Visio file. If you download it and open it in Visio you will see the scale of the drawing.
  3. 3 points
    Thanks for the reply. It turns out I’d bent a few pins on the CPU socket. God only knows how!! I managed to carefully bend the pins back and I’ve managed to boot up with RAM slots 1 and 2 populated. All seems to be running ok for now. Spot the bent pins in the pic 😬
  4. 3 points
    Are you apologetic because you are truly sorry or that you got caught? - Said every Mom on the planet.
  5. 3 points
    Thanks for guidance on this! Just a note on the solution I settled with in the end for anyone else who might come across this thread looking to do the same thing - Tried a couple different cheaper eSATA cards with no success having the drives in ICYCube detected in CentOS. Looked into driver issues and again with no luck. The USB connection did work fine, but I wasn't overly happy with that as a solution. I instead got this HBA (10Gtek Host Bus Adapter - https://www.amazon.co.uk/gp/product/B01M9GRAUM) with Mini SAS connection, and swapped the ICYCube for an alternative external enclosure with compatible Mini SAS connection (SilverStone SST-TS431S-V2 - https://www.amazon.co.uk/gp/product/B0771S45X3). On setup this worked perfectly, with the drives properly detected. I'm actually also a little happier with the SilverStone unit in general so far, it feels of higher build quality and fan also seems a little quieter. I also decided to move away from using the B120i RAID controller, and hardware raid in general, and am now using ZFS / RAIDZ. Thanks again for help as I was working through this!
  6. 3 points
    It's not a server unless it sounds like a wind tunnel
  7. 2 points
    Where else can you compliment someone on their "nice rack" these days
  8. 2 points
    No it is not. That is registered memory. This server does not support registered memory.
  9. 2 points
    Yes, for me the RAM was the limiting factor as well. I moved onto a DL360 Gen8. Issue with the MS is even if you can fit a mini-ITX board into the case with another low-profile CPU fan, you'll have to do some heavy modding to the I/O area as it's not removable. I tried to do something similar with the HP Z820 workstation because the case is really quite nice. Unfortunately the amount of modding required was beyond my skill and the effort didn't seem to be worth it.
  10. 2 points
    A TDP of 69W is very high, even with the 65W replacement cooler from HP that costs a fortune, so I personally wouldn't use that CPU.
  11. 2 points
    It appears that should work, as it is unbuffered and ECC. The MS Gen8 does not require HP branded memory.
  12. 2 points
    Sorted it. I created a UEFI boot USB and it worked.
  13. 2 points
    I have run four consumer grade SSDs on it.
  14. 2 points
    Had some time on my hands. I've been running ESXi since I first got the microserver, upgrading until 6.5 update 2 with all the faff from downgrading drivers. I saw a path to migrate all my guests to Hyper-V with the Microsoft VM migration tool. So long story short: the HP B120i RAID driver works with the Windows Server 2019 Hyper-V image. I was a bit concerned as it's only listed for Server 2016. Installed OpenSSH packages and Windows Admin Center, as I don't use Enterprise/Pro Windows at home. Even put on the Windows Subsystem for Linux so from the command line I can be in a more familiar shell; not a PowerShell user for the most part.

I shut down all guests and copied them off somewhere safe. The biggest VM disk was 1TB. Used pigz to speed up compression when I tarred them all up; the 1TB was actually mostly allocation, not data, which shrinks massively when compressed. The Linux subsystem was very handy for moving everything back and expanding it after I wiped the disks with the Windows install. All images needed connecting to a new virtual router, reverted to DHCP on the NIC, and had to be readdressed where I had statics. Windows Admin Center allowed RDP to the consoles for all of them to do this. I have one PC with Windows Home and Edge on it. Had a mix of FreeBSD, Linux and one Windows image.

Amazingly pain free, faster and solid. The only painful part was the time it took to convert all the virtual disks. I kind of wish I had done this sooner, but I think the advent of Windows Admin Center is what made it possible, as I can build a guest over a web interface just like ESXi, from anywhere. I don't do any hardware passthrough, so I can't speak to that or to complicated guests. If, like me, you're wondering about dumping ESXi for something else because you used the B120i softraid and are too cheap to buy a RAID card to replace it, I think this is the least worst solution. I was going to try Proxmox if I failed miserably, but it worked out fine. Found this guide helpful: https://www.nakivo.com/blog/how-to-convert-vmware-vm-to-hyper-v/
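The tar-plus-pigz step described above can be sketched from Python roughly like this (the directory and file names are made up for the example, and pigz falls back to plain gzip if it isn't installed):

```python
import os
import shutil
import subprocess

# Create a dummy VM export directory with a mostly-empty "disk image";
# names here are hypothetical, not from the post.
os.makedirs("vm-export", exist_ok=True)
with open("vm-export/disk0.vhdx", "wb") as f:
    f.write(b"\0" * 1024 * 1024)  # 1 MB of zeros stands in for unused allocation

# Prefer pigz (parallel gzip) as the post does; fall back to gzip.
compressor = shutil.which("pigz") or shutil.which("gzip")

# tar -cf - vm-export | pigz > vm-export.tar.gz
tar = subprocess.Popen(["tar", "-cf", "-", "vm-export"], stdout=subprocess.PIPE)
with open("vm-export.tar.gz", "wb") as out:
    subprocess.run([compressor], stdin=tar.stdout, stdout=out, check=True)
tar.stdout.close()
tar.wait()

# Mostly-zero allocation compresses massively, as the post observes.
print(os.path.getsize("vm-export.tar.gz") < 1024 * 1024)
```

The same effect on the command line is just `tar -cf - vm-export | pigz > vm-export.tar.gz`; the Python wrapper only matters if you are scripting the whole migration.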
  15. 2 points
    I'd take the 1265, it has hyperthreading and runs cooler. Bit more useful info here on ARK. https://ark.intel.com/content/www/us/en/ark/compare.html?productIds=65734,65728
  16. 2 points
    As long as it doesn't block airflow in the case you will be fine. I've attached SSDs with double-sided tape to the inside and lived.
  17. 2 points
    It does bring many things into question. I have gone cheap on some things, others not so much. My DVD/BR rips, which make up probably 90-95% of the data usage, I do not want to have to replace, but I could; consequently I have them on a RAID 6 array, and the critical data is on RAID 1. As for losing a RAID on rebuild, it's an old fear-mongering article but it gets the point across, and it explains the math behind what happens: https://www.zdnet.com/article/why-raid-5-stops-working-in-2009/ Obviously it did not/does not stop working, but its efficacy as data protection becomes a greater and greater issue as storage devices increase in size.
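The math in that article is just arithmetic on the drives' unrecoverable read error (URE) rating. A rough sketch, where the 1-in-10^14 URE rate and the 12 TB rebuild size are illustrative assumptions rather than figures from the post:

```python
import math

def rebuild_failure_probability(rebuild_bytes, ure_rate_bits=1e14):
    """Probability of hitting at least one unrecoverable read error
    while reading rebuild_bytes during a RAID rebuild, assuming
    independent errors at a rate of 1 per ure_rate_bits bits read."""
    bits = rebuild_bytes * 8
    # P(no URE) = (1 - 1/ure_rate)^bits; log1p keeps this numerically stable
    p_ok = math.exp(bits * math.log1p(-1.0 / ure_rate_bits))
    return 1.0 - p_ok

# Rebuilding a failed drive in a 4x 4TB RAID 5 means reading the
# other three drives in full: 12 TB.
p = rebuild_failure_probability(12e12)
print(f"chance of hitting a URE during rebuild: {p:.0%}")
```

With those assumptions the rebuild is more likely to fail than succeed, which is the article's point; RAID 6 survives the extra error, which is why the big media array above uses it.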
  18. 2 points
    Bumping this topic :) So I upgraded my MicroServer Gen8 to a Xeon 1270 v2 and decided to install active cooling. I tried to win a Scythe Kozuti on eBay, but no luck. Then I read this topic and bought a Noctua NH-L9i, but mounting it on cable ties didn't seem good. So I found a guy with a milling machine, and he made "short" legs for the L9i (original Noctua part on top, milled version below). The cooler fits perfectly on the board using the Noctua screws with additional spacers on the tray side (hello HPE, with your non-standard 1155 dimensions). And don't forget to remove the nuts for the original heatsink from the motherboard tray. The cooler is connected to the case fan connector with a Y-splitter.
  19. 2 points
    Yeah, let's be fair. You say "to their credit", but if they hadn't been caught out then we'd all still be in the dark. They didn't do this through choice; someone somewhere at WD made the call not to inform the customer.
  20. 2 points
    Hi all, here's a guide I would like to share around Windows Storage Spaces, creating a 4x drive parity pool. In a nutshell, I have Windows Server 2019 and a Storage Spaces parity pool running very nicely on my Gen8. Here's the configuration I used and how to copy my setup. (I still believe ZFS or unRAID are a far better choice as a filesystem on these limited servers, but if you need Windows like I do, then Storage Spaces can be an excellent alternative.) This is my "best effort" guide and by no means perfect. It does however yield excellent results for both read and write speeds.

Hardware:
Gen8 MicroServer, 16GB RAM
CPU - stock for now (1270 V3 on its way)
Disks - 4x 3TB WD NAS drives in the front bays
SSD - Samsung 850 Evo 250GB

First lesson: DON'T use the Windows GUI to create the pool or virtual disk, as the GUI applies terrible defaults that you can't edit and will ruin performance. You must use PowerShell. Also make sure you're on the latest version of Windows Server, as a LOT has changed and been improved recently.

Terms:
PhysicalDiskRedundancy - Parity
Columns - 4 (the data segments striped to disks; should match your 4 disks)
Interleave - 256K (the amount of data written to each "column" or disk; in this case a 256KB interleave gives us a 64K write to each disk)
LogicalSectorSize - 4096
PhysicalSectorSize - 4096
ReFS/NTFS cluster - 64K

Overall configuration: 4 drive file system, one bootable SSD in RAID mode.

BIOS setup, initial:
F9 into the BIOS and set the B120i controller into RAID mode.
F5 into the RAID manager and create 1 individual RAID0 logical drive for the SSD.
Set the SSD as the preferred boot drive (yes, in the same screen).
Set the cluster size to 63.
Enable caching.

Windows install:
Install Windows Server 2019 Standard GUI edition from ISO.
Offer up the B120i RAID drivers via a USB stick so the wizard can see the SSD RAID0 drive. Filename p033111.exe (have them extracted).
Windows update, patch and reboot.

BIOS setup, post Windows:
Once Windows is up and running, go back into the F5 RAID manager and finish the setup of the 4 front drives as 4x individual RAID0 logical drives.
Check the SSD is still set as the preferred boot drive (yes, in the same screen).
Set the cluster size to 63.

Windows config of Storage Spaces:
At this point you should see 4 individual drives ready to be used as a storage pool. Try to set each disk to have a cache (not all drives support this): Win + X to open the side menu, Device Manager, expand Disk Drives, right click the "HP Logical Volume" for each drive, and check "Enable write caching on the device". (If it doesn't work don't stress, it's optional but nice to have.)

PowerShell - run as admin.

Determine the physical disks available for the pool we're about to create:

Get-PhysicalDisk | ft friendlyname, uniqueid, mediatype, size -auto

Your output will look something like this, so identify the 4 drives that are the same and take note of their uniqueid. Mine are the bottom four drives, all 3TB in size:

friendlyname          uniqueid                                  size
------------          --------                                  ----
SSD HP LOGICAL VOLUME 600508B1001C5C7A1716CCDD5A706248  250023444480
HP LOGICAL VOLUME     600508B1001CAC8AFB32EE6C88C5530D 3000559427584
HP LOGICAL VOLUME     600508B1001C51F9E0FF399C742F83A6 3000559427584
HP LOGICAL VOLUME     600508B1001C2FA8F3E8856A2BF094A0 3000559427584
HP LOGICAL VOLUME     600508B1001CDBCE168F371E1E5AAA23 3000559427584

Rename the friendly name based on the uniqueid from above and set the media type to HDD:

Set-PhysicalDisk -uniqueid "Your UniqueID" -newFriendlyName Disk1 -mediatype HDD

You will need to run that 4 times, once with each uniqueid, and create a new friendly name for each drive. I called mine Disk1, Disk2, etc.:

Set-PhysicalDisk -uniqueid "600508B1001C2FA8F3E8856A2BF094A0" -newFriendlyName Disk1 -mediatype HDD
Set-PhysicalDisk -uniqueid "600508B1001CDBCE168F371E1E5AAA23" -newFriendlyName Disk2 -mediatype HDD
Set-PhysicalDisk -uniqueid "600508B1001CAC8AFB32EE6C88C5530D" -newFriendlyName Disk3 -mediatype HDD
Set-PhysicalDisk -uniqueid "600508B1001C51F9E0FF399C742F83A6" -newFriendlyName Disk4 -mediatype HDD

Verify the disks have been set correctly. The following shows which physical disks are in the primordial pool and CAN be used in the new pool ("primordial" just means on your local server and available); you're just checking here that the renaming worked and they are all set to HDD type:

Get-StoragePool -IsPrimordial $true | Get-PhysicalDisk | Where-Object CanPool -eq $True

You should see your four drives with the nice names that you set, like "Disk1".

Now find out your storage subsystem name, as we need it for the next command. It looks like "Windows Storage on <servername>"; mine is "Windows Storage on Radaxian":

Get-StorageSubSystem

Create a new storage pool named "Pool1" that uses all available disks and sets the sector size:

New-StoragePool -FriendlyName Pool1 -StorageSubsystemFriendlyName "Windows Storage on Radaxian" -PhysicalDisks (Get-PhysicalDisk -CanPool $True) -LogicalSectorSizeDefault 64KB

Now create the virtual disk on the new pool with 4 columns and parity set correctly (this is critical to do via PowerShell):

New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "VDisk1" -ResiliencySettingName Parity -NumberOfDataCopies 1 -NumberOfColumns 4 -ProvisioningType Fixed -Interleave 256KB -UseMaximumSize

Those two commands should complete without error; if they don't, go back and check your syntax.

Go back into the Windows GUI and open Server Manager\File and Storage Services\Servers. You should see the storage pool (Pool1) and the virtual disk (VDisk1) created in the previous steps. Select Disks in the GUI, identify your new VDisk1 and right click it. Set it to Online; this will also set it to use a GPT boot record. On the same screen, in the Volumes pane below, click TASKS and select "New Volume". Select ReFS and a sector size of 64K, enter a volume name like "Volume1" or whatever you want to call it, and select a drive letter such as Z. (You can use NTFS here for slightly better performance, but I'm sticking with ReFS as it has some benefits.)

You'll now have a storage pool, a virtual disk on top, and a volume created with optimal settings.

Go back into PowerShell. Enable power-protected status if applicable (just try it, no harm; ideally you should have your server connected to a basic UPS to protect it from power outages):

Set-StoragePool -FriendlyName Pool1 -IsPowerProtected $True

Check that the sector sizes of the virtual disk and all relevant settings are correct:

Get-VirtualDisk | ft FriendlyName, ResiliencySettingName, NumberOfColumns, Interleave, PhysicalDiskRedundancy, LogicalSectorSize, PhysicalSectorSize

Example output:

FriendlyName ResiliencySettingName NumberOfColumns Interleave PhysicalDiskRedundancy LogicalSectorSize PhysicalSectorSize
VDisk1       Parity                              4     262144                      1              4096               4096

You're done... enjoy the new volume. At this point you can share out your new volume Z and allow client computers to connect.

Some other PowerShell commands I found useful. Get more verbose disk details around sectors:

Get-VirtualDisk -FriendlyName VDisk1 | fl
Get-PhysicalDisk | select FriendlyName, Manufacturer, Model, PhysicalSectorSize, LogicalSectorSize | ft

Check whether TRIM is enabled (this output should be 0):

fsutil behavior query DisableDeleteNotify

If TRIM is not enabled, you can turn it on with:

fsutil behavior set disabledeletenotify ReFS 0
fsutil behavior set disabledeletenotify NTFS 0

Check the power-protected status and cache:

Get-StorageAdvancedProperty -PhysicalDisk (Get-PhysicalDisk)[0]

Once your data has been migrated back to your new pool from backup, make sure you run this command to "spread out" the data properly; it rebalances the Spaces allocation for all of the Spaces in the pool:

Optimize-StoragePool -FriendlyName "Pool1"

I'm yet to get my Xeon in the mail, but once that's installed I think the disk performance will go up even higher, as the stock CPU is junk.
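For anyone sizing a similar pool, the capacity arithmetic behind the settings above can be checked with a quick back-of-the-envelope sketch (the function is mine; the 4-column single-parity layout and 3TB drives come from the post, the rest is generic parity-RAID math):

```python
def parity_pool_capacity(n_disks, disk_tb, parity_disks=1):
    """Usable capacity (TB) of a single-parity pool where the column
    count equals the number of disks: each stripe gives up one
    column's worth of space to parity."""
    raw_tb = n_disks * disk_tb
    return raw_tb * (n_disks - parity_disks) / n_disks

# 4x 3TB drives, NumberOfColumns 4, Parity (as configured above):
# 12 TB raw, 3/4 of each stripe holds data.
print(parity_pool_capacity(4, 3.0))  # → 9.0 (TB usable)
```

That 75% space efficiency is the trade for surviving one drive failure; a 4-disk two-way mirror would survive the same failure at only 50% efficiency, which is why parity is attractive on a 4-bay box like the Gen8.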
  21. 2 points
    Lots of data in this article: https://www.servethehome.com/surreptitiously-swapping-smr-into-hard-drives-must-end/
  22. 2 points
    They didn't say anything about resilvering... which puts any drive that's being added to an already utilized RAID volume into a relentless write task, and that consequently puts a strain on any SMR drive. It doesn't matter if it's in a datacenter or a home NAS scenario... resilvering works the same way and puts a lot of stress on the drives in the volume. This is classic bait 'n' switch, WD. Not good.
  23. 2 points
    Red NAS drives, SMR versus CMR. I've investigated this a bit further and came up with the following conclusions. Western Digital is not transparent with any of this information, so this is based just on what I found.
1. The WDx0EFRX drives appear to be the older model. I purchased WD Reds in 2013 and they match the 2013 datasheet; ditto for some 3TB Reds I bought in 2016. As recently as the 2018 datasheet, WD listed WD40EFRX drives in their NAS datasheet. However, that was the first appearance of the WDx0EFAX drives, in 10 and 12 TB sizes.
2. Their latest datasheet, published in December 2019, lists both WDx0EFRX and WDx0EFAX models for Reds, with interesting differences in cache and speed listed between the two, without explanation.
3. Amazon and others still have WDx0EFRX and WDx0EFAX drives listed separately. I purchased a "spare" WD Red over the weekend; it arrived today and is a WDx0EFRX model.
4. QNAP has a hardware compatibility list. My NAS, a QNAP TS-451, does not list WDx0EFAX as a compatible drive. It does have WDx0EFRX spelled out.
5. On the Synology compatibility list, the WD60EFAX and the WD20EFAX are listed as SMR drives.
The following is not verified, but was mentioned in the QNAP and Synology forums: the WDx0EFAX drives may have been modified through cache to give SMR drives better compatibility with RAID. Here is a link to the datasheets I've found: https://drive.google.com/drive/folders/1EcjO5Pih7BilAshWhYcxbG6pFTwWWAOj?usp=sharing
  24. 2 points
    The spec sheet for the Dell XPS 8700 shows that the SATA ports are SATA 3.0. https://downloads.dell.com/manuals/all-products/esuprt_desktop/esuprt_xps_desktop/xps-8700_reference guide_en-us.pdf The 860 EVO is a very good drive. You may need a 2.5" to 3.5" drive adapter to mount it. https://downloads.dell.com/manuals/all-products/esuprt_desktop/esuprt_xps_desktop/xps-8700_owner's manual_en-us.pdf
  25. 2 points
  26. 2 points
    Good overview: https://www.servethehome.com/hpe-proliant-microserver-gen10-plus-v-gen10-hardware-overview/
  27. 2 points
    iLO 2.73 released: https://support.hpe.com/hpsc/swd/public/detail?swItemId=MTX_ba3437a6c8d843f39ab5cace06
UPGRADE REQUIREMENTS: OPTIONAL
***ATTENTION*** Note for ESXi users: If you are booted from the Embedded SD Card, it is strongly recommended that you reboot the server immediately after updating the iLO firmware.
FIRMWARE DEPENDENCY: Hewlett Packard Enterprise recommends the following or greater versions of iLO utilities for best performance:
- RESTful Interface Tool (iLOREST) 2.3
- HPQLOCFG v5.2
- Lights-Out XML Scripting Sample bundle 5.10.0
- HPONCFG Windows 5.3.0
- HPONCFG Linux 5.4.0
- LOCFG v5.10.0
- HPLOMIG 5.2.0
KNOWN ISSUES:
- Fibre Channel ports are displayed with degraded status if they are configured but not attached.
FIXES: The following issues are resolved in this version:
- Added fix for Embedded Remote Support in an IPv6-only environment.
- Added fix for Embedded Remote Support data collection for systems with multiple Smart Array Controllers.
ENHANCEMENTS:
- Suppress SNMP traps for NIC link up/link down events that occur during POST.
  28. 2 points
    In theory it should work with a SATA M.2 disk, yes, as it uses a SATA controller. What I assume @schoondoggy was referring to is an HBA adapter that allows actual SATA disks to be added.
  29. 2 points
    I had the same problem. The solution that worked for me was to change the Power Regulator setting to "OS Control Mode" in iLO. Hope this helps.
  30. 2 points
    I want a video, then when it all goes tits up he can at least upload it to Youtube
  31. 2 points
    I am interested in (old style):
- motherboards/NICs for 10GbE (especially with AMD CPUs)
- computer cases for lots of disks, say 12+
… and anything in the 'Home and Family' topic series. Have a nice show! 🙂 PS trying to update my next door neighbour's 2009 laptop (Windows 7, HDD) to Windows 10 as I write.
  32. 2 points
    Hello, I found this and 3D printed it. Not a great design, but it does the job. https://www.thingiverse.com/thing:3205477
  33. 2 points
    Use a VPN if possible. There are quite a few vulnerabilities in the RDP stack, so we have closed it for clients unless they specifically sign a waiver.
  34. 2 points
    For the last 10 years or so, I've been using OneNote to help manage these things. I'll download the manual, info, etc., and scan receipts into the folder. I have a separate page for each appliance, and since it's on OneDrive, it's available wherever I go. With regards to extended warranties, I think of them as very limited insurance policies against the loss of the object. In the great majority of cases it's not a good deal; they offer it to you to make money. Beware of confirmation bias: the handful of times it comes in handy are far more memorable than the majority of times it was a waste of money.
  35. 2 points
    Hey Dave, a few things that might help you and others. Haven't used the UDM yet (I'm waiting for the UDM Pro, which is still in final beta), but my understanding is that you can restore a CloudKey controller backup to the UDM's built-in CloudKey.

Personally, in your configuration, I wouldn't physically reconfigure and move coax feeds and equipment. I would install the UDM in the basement, replacing existing gear with just a simple cable swap. Sure, you're wasting the built-in AP, but everything else is much more straightforward. Theoretically, you should be able to restore your CloudKey backup and have almost the same network up and running in just a few minutes. Then you can start deconstructing or reconfiguring more at your leisure, rather than out of the necessity of getting the network up and running for the entire household with no downtime 🙂

In your review of your existing setup, IMHO, the primary benefit of Unifi, even more than the wide choice of physical AP units and mounting options, is the extensive configurability and monitoring/status options. You kinda touched on this towards the end of the podcast, but the ability to limit the radio power, turn off the auto settings, and assign the Wi-Fi channels (especially on the crowded 2.4 GHz frequency) to non-overlapped channel numbers is a big win for anyone trying to fix dead spots or avoid buying extra APs as a "brute force" solution to coverage. (Not that there is anything wrong with that; sometimes spending $100 on an extra AP instead of spending hundreds of dollars of time and effort to tweak is the right choice.)

It wasn't clear that you are fully exploiting the Unifi flexibility to fix your Ring camera/doorbell problems. The first thing I usually do with a Unifi setup is to create a 2.4 GHz-only SSID and enable it only on the AP radio that is physically the right unit for the doorbells (or any IoT device that only supports 2.4 GHz) to connect. 
Overriding the autoconnect/automatic behavior in Ring and other devices and forcing the connection to a specific AP solves almost all the Wi-Fi problems with these and similar devices that have somewhat dumb Wi-Fi firmware or less than ideal reliability. It's worth the trouble to re-program the SSID inside the Ring or other device and the results are much better than just having multiple AP's hoping they are in range. I'm really curious whether the UDM will be successful in bringing Unifi to the general consumer market, but I'm skeptical it will really be able to displace Eero, Google, Orbi, and other true consumer gear. One irony is that right now the early adopters of the UDM are all sophisticated Unifi users and that thing doesn't fit and looks awful in their otherwise beautiful rack porn photos they have been posting 🙂 Granted the UDM is a lot cheaper than buying the equivalent individual parts, but there are advantages to being modular too. Easier service, not losing everything if a non-critical module goes down, etc. There will always be a lively discussion between modular or integrated that goes all the way back to mainframes with terminals versus minicomputers and later PC's, so not trying to re-ignite that long standing debate, but merely point out that saving money isn't always the most significant reason to choose one over another. In the case of Unifi, both fans and users are primarily looking for new functionality. Personally, I would prefer to see some new capabilities made available, regardless of whether it is all-in-one or requires a new box. I can work around price and modularity issues, but I can't work around the lack of a critical feature. So, to bring this home, the only feature that UDM provides that doesn't exist in the current gear is the new USG router/firewall. Specifically, the UDM is rated to handle 1 Gbps speeds with full hardware speed packet analysis and intrusion processing. 
    The current USG is only able to handle 100 Mbps and is severely taxed in performance at that speed. This is significant because consumer fiber and high-speed home Internet connections have zoomed from 3 Mbps to over 1 Gbps in many urban and metropolitan areas. Since you mentioned you don't have a USG in your current setup, I think you aren't in a good position to really appreciate the difference provided by the UDM versus the existing Unifi gear. I know some Unifi users prefer to use a separate router or the Ubiquiti EdgeRouter products because of these limitations, and thus don't have the integrated management provided by using the USG. On a positive note, the UDM finally removes the insecure PPTP VPN protocol, but it has not yet added support for OpenVPN for incoming VPN (to connect back to your home when you are away, or to use your home network as your own private VPN Internet gateway instead of a paid service), and that is a bit disappointing.
  36. 2 points
    I was looking for a YouTube video on QNAP's QVPN product, and right up there in the top search results was a Home Server Show legend: Mike Faucher, aka PCDoc. He has been relatively active recently on his YouTube channel and from what I can see has been posting useful and interesting content of interest to Reset users. Much like back in the halcyon days of the Home Server Show, he was our "Joe Friday": "Just the facts, ma'am". The QVPN video was exactly what I was looking for. Search for his name on YouTube; here is a link to his website: https://thedocsworld.net/
  37. 2 points
    I have an X3421 Gen10 MicroServer w/ stock 8GB RAM running the Windows Server 2019 eval. I did the standard install, then added Hyper-V and container support. This isn't part of a domain; nothing is set up beyond the initial install and normal Windows updates. The boot disk is an MX500 SSD attached to SATA5, with 2 4TB 3.5" drives in the cage.

I was seeing ~20% CPU utilization for the SYSTEM process and ~22% for SYSTEM INTERRUPTS. I tracked that down to the vEthernet device: when I uninstalled Hyper-V and removed the vEthernet device, it dropped down to ~12.5% CPU for SYSTEM and ~2.5% CPU for SYSTEM INTERRUPTS. Note that this is looking at the Task Manager performance tab. If I right click on SYSTEM and select Go To Details, it shows SYSTEM taking about 6% CPU and SYSTEM INTERRUPTS taking about 2%; I don't know what causes this discrepancy.

Running LatencyMon (for about a 30 second run) shows:
Highest measured time: 343
Highest reported ISR time: 33.8 (storport.sys)
Highest reported DPC time: 212.3 (ndis.sys)
Total hard page faults: 26

I'm not sure how to track down the source of the SYSTEM process CPU usage, and I don't see anything unusual in the event logs. Truthfully I'm more concerned about the SYSTEM CPU usage, but both seem high to me for a server that is basically just sitting there. Is there something strange going on, or is this just the cost of running Server 2019 on a relatively low performance CPU? This server is replacing Home Server 2011 on a MediaSmart EX495, and idle CPU on that was about 3-5%. I have installed BIOS ZA10A360, chipset driver WS2012R2_W8_1, AMD chipset graphics driver 17.1.1, and the Broadcom NX1 driver (cp031155), all from the HP site. Not sure what else to try. Is this normal for Server 2019 with no load? Thanks in advance for any advice, -Jim
  38. 2 points
    Just in case anyone has this problem, the solution was to install the latest Broadcom drivers - I used this https://www.dell.com/support/home/us/en/19/drivers/driversdetails?driverid=cn7mv&oscode=ws19l&productcode=poweredge-r430 and I am now down to around 2-3% cpu at idle. No idea why the Broadcom NX1 driver (cp031155) from the hpe site had such high cpu. -Jim
  39. 1 point
    The latest update I had from hardware.com was they were due in on the 17th, even though the website says the 14th.
  40. 1 point
    Looking at my drives, I tend to have purchased HGST. Although they are owned by WD, they had their own designs and factories. Don't forget about Toshiba drives; Toshiba has always built good drives. When WD bought HGST they had to sell some of HGST's tech and factories to Toshiba, and there seems to be a good deal of similarity between Toshiba enterprise and NAS drives and the previous HGST designs. The issue, of course, is that you don't see Toshiba go on sale.
  41. 1 point
    Even if you stick Win10 on there you can still run VMs on it, as you can add the Hyper-V "feature" if you want to dabble. As for an exploit being able to affect the server, it depends on which exploit, I should imagine; not something I've had any dealings with.
  42. 1 point
    For mini PCIe, I have used this for SATA: https://www.amazon.com/IO-Crest-Controller-Components-SI-MPE40125/dp/B072BD8Z3Y and this for dual port NIC: https://www.amazon.com/CREST-SI-MPE24046-Gigabit-Ethernet-Interface/dp/B01N9HNXBB They seem to work fine. On some systems you need to confirm it is a mini PCIe slot and not a mSATA slot. The only issue I have run into is the physical space with small systems from Dell and HP. The only adapter I have used with M.2 is one of these to use a U.2 drive: https://www.amazon.com/U-2-M-2-Adapter-Interface-Drive/dp/B073WGN61Y
  43. 1 point
    I've got two HP MediaSmart Servers, an EX487 and EX490, upgraded to Windows Server 2019. They handle it like a champ!
  44. 1 point
    why can one not answer multiple choices? can't eat just one!
  45. 1 point
    I imagine it's a case of what they are certified for, rather than the max they can actually take. It's not like they have the time to test every single drive and certify all of them.
  46. 1 point
    You should be able to set the ISP modem/router into bridge mode from the Broadband menu https://www.manualslib.com/manual/615384/Zyxel-Communications-Sbg3300-N-Series.html?page=52 Regards Matt
  47. 1 point
    I still cannot understand why you try to use a bootable USB to update the BIOS. It is so simple to update the BIOS using iLO... And yes, the right file to upload via iLO is CPQJ0613.684.
  48. 1 point
    The onboard RAID is implemented with the Marvell SATA controller: four SATA 6Gb/s ports, AHCI or RAID 0/1/10, no cache.
  49. 1 point
    The process detailed in the above URL worked like a charm on my HPMSG8. It would have been nice to avoid having to use Windows, but a few minutes booting up my W10P VM to prepare the USB stick didn't hurt too badly... FWIW, -MB
  50. 1 point
    Hey guys, looks like there is a new official firmware for the Digitus DS-30104-1 (with Marvell 88SE9230), dated 2018-12-21, on their website. A bit unexpected; found it by coincidence 🙂
PACKAGE VERSION[0xFFFFFFFF]:
AUTOLOAD VERSION[0x00000000]: 200019
LOADER VERSION[0x0000C000]: 21001008
BIOS VERSION[0x00020000]:
FIRMWARE VERSION[0x00030000]:
https://www.digitus.info/en/products/computer-accessories-and-components/computer-accessories/io-cards/ds-30104-1/
http://ftp.assmann.com/pub/DS-/DS-30104-1___4016032330240/DS-30104-1_firmware_mul_DS-30104-1 Firmware_20181221.zip
Installed it on my DS-30104-1 - works well!