RESET Forums (homeservershow.com)

Leaderboard

  1. schoondoggy (Moderators): 97 points, 8,962 content count
  2. Trig0r (Moderators): 34 points, 1,416 content count
  3. Al_Borges (Members): 22 points, 269 content count
  4. Dave (Administrators): 19 points, 4,910 content count

Popular Content

Showing content with the highest reputation since 07/08/2019 in Posts

  1. 5 points
    As I do not plan to build any more of the SDM Rev4 brackets, I have decided to share the schematic. https://1drv.ms/u/s!Ao9ObX1BCpuXsstkeE7Jc5cyRmYTFQ?e=8ouDcP Edit: This is a Visio file. If you download it and open it in Visio, you will see the scale of the drawing.
  2. 4 points
    OK, it's time to solve all the overheating problems. Parts: 1x Noctua NF-A8-5V fan with USB power adaptor cable, 1x Akasa 3D Fan Guard AK-FG08-SL. https://imgur.com/a/7npqgdc Now the LSI 9271-4i temperature stays at approximately 57 degrees or below.
  3. 3 points
    Thanks for the reply. It turns out I’d bent a few pins on the CPU socket. God only knows how!! I managed to carefully bend the pins back and I’ve managed to boot up with RAM slots 1 and 2 populated. All seems to be running ok for now. Spot the bent pins in the pic 😬
  4. 3 points
    Are you apologetic because you are truly sorry or that you got caught? - Said every Mom on the planet.
  5. 3 points
    Thanks for the guidance on this! Just a note on the solution I settled on in the end, for anyone else who might come across this thread looking to do the same thing. I tried a couple of different cheaper eSATA cards with no success getting the drives in the ICYCube detected in CentOS, and looked into driver issues, again with no luck. The USB connection did work fine, but I wasn't overly happy with that as a solution. I instead got this HBA (10Gtek Host Bus Adapter - https://www.amazon.co.uk/gp/product/B01M9GRAUM) with a Mini SAS connection, and swapped the ICYCube for an alternative external enclosure with a compatible Mini SAS connection (SilverStone SST-TS431S-V2 - https://www.amazon.co.uk/gp/product/B0771S45X3). On setup this worked perfectly, with the drives properly detected. I'm actually also a little happier with the SilverStone unit in general so far; it feels of higher build quality and the fan also seems a little quieter. I also decided to move away from using the B120i RAID controller, and hardware RAID in general, and am now using ZFS / RAIDZ. Thanks again for the help as I was working through this!
  6. 3 points
    It's not a server unless it sounds like a wind tunnel
  7. 2 points
    A TDP of 69W is very high, even with the 65W replacement cooler from HP (which costs a fortune), so I personally wouldn't use that CPU.
  8. 2 points
    It appears that should work, as it is unbuffered and ECC. The MS Gen8 does not require HP-branded memory.
  9. 2 points
    Sorted it. I created a UEFI boot USB and it worked.
  10. 2 points
    I have run four consumer-grade SSDs on it.
  11. 2 points
    Had some time on my hands. I've been running ESXi since I first got the MicroServer, upgrading up to 6.5 Update 2 with all the faff of downgrading drivers, and I saw a path to migrate all my guests to Hyper-V with the Microsoft VM migration tool. Long story short, the HP B120i RAID driver works with the Windows Server 2019 Hyper-V image; I was a bit concerned as it is only listed for Server 2016. I installed the OpenSSH packages and Windows Admin Center, as I don't use Enterprise/Pro Windows at home, and even put on the Windows Subsystem for Linux so that from the command line I can be in a more familiar shell (not a PowerShell user for the most part). I shut down all guests and copied them off somewhere safe; the biggest VM disk was 1TB. I used pigz to speed up compression when I tarred them all up; the 1TB was actually mostly allocation, not data, which shrinks massively when compressed. The Linux subsystem was very handy for moving everything back and expanding it after I wiped the disks with the Windows install. All images needed connecting to the new virtual router, reverted to DHCP on the NIC, and had to be readdressed where I had statics; Windows Admin Center allowed RDP to the consoles to do all of this (I have one PC with Windows Home and Edge on it). I had a mix of FreeBSD, Linux and one Windows image. Amazingly pain-free, faster and solid; the only painful part was the time it took to convert all the virtual disks. I kind of wish I had done this sooner, but I think the advent of Windows Admin Center is what made it possible, as I can build a guest over a web interface just like ESXi, from anywhere. I don't do any hardware passthrough, so I can't speak to that or to complicated guests. If, like me, you're wondering about dumping ESXi for something else because you used the B120i soft RAID and are too cheap to buy a RAID card to replace it, I think this is the least-bad solution. I was going to try Proxmox if I failed miserably, but it worked out fine. Found this guide helpful: https://www.nakivo.com/blog/how-to-convert-vmware-vm-to-hyper-v/
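    For anyone following the same path, the Hyper-V end of an import like this can also be scripted; here is a minimal sketch (the VM name, VHDX path and switch name are made up, and it assumes the guest disks have already been converted to VHDX):

      # Requires the Hyper-V role and its PowerShell module.
      $name = 'web01'                        # hypothetical guest name
      $vhd  = 'D:\Hyper-V\web01\web01.vhdx'  # hypothetical path to the converted disk

      # Generation 1 is the safe choice for guests that booted BIOS firmware under ESXi.
      New-VM -Name $name -MemoryStartupBytes 4GB -Generation 1 `
             -VHDPath $vhd -SwitchName 'LAN vSwitch'

      Set-VMProcessor -VMName $name -Count 2   # give it two vCPUs
      Start-VM -Name $name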
  12. 2 points
    I'd take the 1265; it has hyperthreading and runs cooler. A bit more useful info here on ARK: https://ark.intel.com/content/www/us/en/ark/compare.html?productIds=65734,65728
  13. 2 points
    As long as it doesn't block airflow in the case, you will be fine. I've attached SSDs with double-sided tape to the inside and lived.
  14. 2 points
    It does bring many things into question. I have gone cheap on some things, others not so much. My DVD/BR rips, which make up probably 90-95% of the data usage, I do not want to have to replace, but I could; consequently I have them on a RAID 6 array, while the critical data is on RAID 1. As for losing a RAID on rebuild, the article below is old fear-mongering, but it gets the point across and explains the math behind what happens: https://www.zdnet.com/article/why-raid-5-stops-working-in-2009/ Obviously RAID 5 did not (and does not) stop working, but its efficacy as data protection becomes a greater and greater issue as storage devices increase in size.
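    For a rough sense of the math behind that article (my own back-of-the-envelope numbers, assuming consumer drives rated at one unrecoverable read error per 10^14 bits and a four-drive RAID 5 of 4 TB disks, so a rebuild has to read the three surviving drives in full):

      P(\text{URE during rebuild}) = 1 - \left(1 - 10^{-14}\right)^{3 \times 4 \times 10^{12} \times 8} \approx 1 - e^{-0.96} \approx 0.62

    With RAID 6 the rebuild still reads the same amount of data, but a single URE is no longer fatal because a second parity copy remains, which is essentially why the bulk storage here sits on RAID 6.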
  15. 2 points
    Bumping this topic :) I upgraded my MicroServer Gen8 to a Xeon 1270 v2 and decided to install active cooling. I tried to win a Scythe Kozuti on eBay, but no luck, and then I read this topic and bought a Noctua NH-L9i. Mounting it on cable ties didn't seem like a good idea, so I found a guy with a milling machine and he made "short" legs for the L9i (original Noctua legs on top, milled ones below). The cooler fits perfectly on the board using the Noctua screws with additional spacers on the tray side (hello HPE, with your non-standard 1155 dimensions). And don't forget to remove the nuts for the original heatsink from the motherboard tray. The cooler is connected to the case fan connector with a Y-splitter.
  16. 2 points
    Yeah, let's be fair: you say "to their credit", but if they hadn't been caught out then we'd all still be in the dark. They didn't do this through choice; someone somewhere at WD made the call not to inform the customer.
  17. 2 points
    Hi all, here's a guide I would like to share on Windows Storage Spaces and creating a 4-drive parity pool. In a nutshell, I have Windows Server 2019 and a Storage Spaces parity pool running very nicely on my Gen8. Here's the configuration I used and how to copy my setup. (I still believe ZFS or unRAID are a far better choice as a filesystem on these limited servers, but if you need Windows like I do, then Storage Spaces can be an excellent alternative.) This is my "best effort" guide and by no means perfect. It does, however, yield excellent results for both read and write speeds.

    Hardware:
    - Gen8 MicroServer, 16GB RAM
    - CPU: stock for now (1270 V3 on its way)
    - Disks: 4x 3TB WD NAS drives in the front bays
    - SSD: Samsung Evo 850 265

    First lesson: DON'T use the Windows GUI to create the pool or virtual disk, as the GUI applies terrible defaults that you can't edit and will ruin performance. Also make sure you're on the latest version of Windows Server, as a LOT has changed and been improved recently. You must use PowerShell.

    Terms:
    - PhysicalDiskRedundancy - Parity
    - Columns - 4 (the data segments striped to disks; should match your 4 disks)
    - Interleave - 256K (the amount of data written to each "column" or disk; in this case a 256KB interleave gives us a 64K write to each disk)
    - LogicalSectorSize - 4096
    - PhysicalSectorSize - 4096
    - ReFS/NTFS cluster - 64K

    Overall configuration: 4-drive file system, one bootable SSD in RAID mode.

    BIOS setup (initial):
    - F9 into the BIOS and set the B120i controller into RAID mode
    - F5 into the RAID manager and create 1 individual RAID0 logical drive for the SSD
    - Set the SSD as the preferred boot drive (yes, in the same screen)
    - Set the cluster size to 63
    - Enable caching

    Windows install:
    - Install Windows Server 2019 Standard GUI edition from ISO
    - Offer up the B120i RAID drivers via a USB stick so the wizard can see the SSD RAID0 drive. Filename p033111.exe (have them extracted)
    - Windows update, patch and reboot

    BIOS setup (post Windows):
    - Once Windows is up and running, go back into the F5 RAID manager and finish the setup of the 4 front drives as 4x RAID0
    - Check the SSD is still set as the preferred boot drive (yes, in the same screen)
    - Set the cluster size to 63

    Windows config of Storage Spaces. At this point you should see 4 individual drives ready to be used as a storage pool.

    Try to set each disk to have a cache (not all drives support this):
    - Win + X to open the side menu
    - Device Manager
    - Expand Disk Drives
    - Right click the "HP Logical Volume" for each drive
    - Check "Enable write caching on the device" (if it doesn't work, don't stress; it's optional but nice to have)

    PowerShell - run as Admin.

    Determine the physical disks available for the pool we're about to create:

      Get-PhysicalDisk | ft FriendlyName, UniqueId, MediaType, Size -Auto

    Your output will look something like this, so identify the 4 drives that are the same and take note of their UniqueID. Mine are the bottom four drives, all 3TB in size.

      friendlyname           uniqueid                                   size
      ------------           --------                                   ----
      SSD HP LOGICAL VOLUME  600508B1001C5C7A1716CCDD5A706248   250023444480
      HP LOGICAL VOLUME      600508B1001CAC8AFB32EE6C88C5530D  3000559427584
      HP LOGICAL VOLUME      600508B1001C51F9E0FF399C742F83A6  3000559427584
      HP LOGICAL VOLUME      600508B1001C2FA8F3E8856A2BF094A0  3000559427584
      HP LOGICAL VOLUME      600508B1001CDBCE168F371E1E5AAA23  3000559427584

    Rename the friendly name based on the UniqueID from above and set the media type to HDD:

      Set-PhysicalDisk -UniqueId "Your UniqueID" -NewFriendlyName Disk1 -MediaType HDD

    You will need to run that 4 times, with each UniqueID code and a new friendly name for each drive. I called mine "Disk1, Disk2" etc.

      Set-PhysicalDisk -UniqueId "600508B1001C2FA8F3E8856A2BF094A0" -NewFriendlyName Disk1 -MediaType HDD
      Set-PhysicalDisk -UniqueId "600508B1001CDBCE168F371E1E5AAA23" -NewFriendlyName Disk2 -MediaType HDD
      Set-PhysicalDisk -UniqueId "600508B1001CAC8AFB32EE6C88C5530D" -NewFriendlyName Disk3 -MediaType HDD
      Set-PhysicalDisk -UniqueId "600508B1001C51F9E0FF399C742F83A6" -NewFriendlyName Disk4 -MediaType HDD

    Verify the disks have been set correctly. The following example shows which physical disks are available in the primordial pool and CAN be used in the new pool. You're just checking here that the friendly-name renaming worked and that they are all set to HDD type. Primordial just means on your local server and available.

      Get-StoragePool -IsPrimordial $true | Get-PhysicalDisk | Where-Object CanPool -eq $True

    You should see your four drives with the nice names that you set, like "Disk1".

    Now find out your storage subsystem name, as we need it for the next command. Just take note of it; it looks like "Windows Storage on <servername>". Mine is "Windows Storage on Radaxian".

      Get-StorageSubSystem

    The following example creates a new storage pool named "Pool1" that uses all available disks and sets the cluster size:

      New-StoragePool -FriendlyName Pool1 -StorageSubsystemFriendlyName "Windows Storage on Radaxian" -PhysicalDisks (Get-PhysicalDisk -CanPool $True) -LogicalSectorSizeDefault 64KB

    Now create the virtual disk on the new pool with 4x disks and parity set correctly (this is critical to do via PowerShell):

      New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "VDisk1" -ResiliencySettingName Parity -NumberOfDataCopies 1 -NumberOfColumns 4 -ProvisioningType Fixed -Interleave 256KB -UseMaximumSize

    Those two commands should complete without error; if they don't, go back and check your syntax.

    Go back into the Windows GUI and open Server Manager\File and Storage Services\Servers. You should see the storage pool and the virtual disk we created in the previous steps:
    - Storage pool - Pool1
    - Virtual disk - VDisk1

    Select Disks in the GUI, identify your new VDisk1 and right click it. Set it to Online; this will also set it to use a GPT boot record.

    On the same screen, in the Volumes pane below:
    - Click TASKS and select "New Volume"
    - Select ReFS and a sector size of 64K
    - Enter a volume name like "Volume1" or whatever you want to call it
    - Select a drive letter such as Z
    (You can use NTFS here for slightly better performance, but I'm sticking with ReFS as it has some benefits.)

    You'll now have a storage pool, a virtual disk on top, and a volume created with optimal settings.

    Go back into PowerShell and enable power-protected status if applicable (just try it, no harm; ideally you should have your server connected to a basic UPS to protect it from power outages):

      Set-StoragePool -FriendlyName Pool1 -IsPowerProtected $True

    Check that the new sector sizes of the virtual disk and all relevant settings are correct:

      Get-VirtualDisk | ft FriendlyName, ResiliencySettingName, NumberOfColumns, Interleave, PhysicalDiskRedundancy, LogicalSectorSize, PhysicalSectorSize

    Example output:

      FriendlyName ResiliencySettingName NumberOfColumns Interleave PhysicalDiskRedundancy LogicalSectorSize PhysicalSectorSize
      VDisk1       Parity                              4     262144                      1              4096               4096

    You're done... enjoy the new volume. At this point you can share out your new volume "Z" and allow client computers to connect.

    Some other PowerShell commands I found useful.

    Get more verbose disk details around sectors:

      Get-VirtualDisk -FriendlyName VDisk1 | fl
      Get-PhysicalDisk | select FriendlyName, Manufacturer, Model, PhysicalSectorSize, LogicalSectorSize | ft

    Check if TRIM is enabled (this output should be 0):

      fsutil behavior query DisableDeleteNotify

    If TRIM is not enabled, you can turn it on with these commands:

      fsutil behavior set disabledeletenotify ReFS 0
      fsutil behavior set disabledeletenotify NTFS 0

    Check the power-protected status and cache:

      Get-StorageAdvancedProperty -PhysicalDisk (Get-PhysicalDisk)[0]

    Once your data has been migrated back to your new pool from backup, make sure you run this command to "spread out the data" properly. It rebalances the Spaces allocation for all of the Spaces in the pool:

      Optimize-StoragePool -FriendlyName "Pool1"

    I'm yet to get my Xeon in the mail, but once that's installed I think the disk performance will go up even higher, as the stock CPU is junk.
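    A small convenience on top of the Set-PhysicalDisk step above, sketched rather than prescribed: the four rename commands can be driven from a table so the UniqueIDs and friendly names stay together in one place (the IDs below are just the ones from the example output).

      # UniqueID -> friendly name mapping; replace with your own IDs.
      $disks = @{
          '600508B1001C2FA8F3E8856A2BF094A0' = 'Disk1'
          '600508B1001CDBCE168F371E1E5AAA23' = 'Disk2'
          '600508B1001CAC8AFB32EE6C88C5530D' = 'Disk3'
          '600508B1001C51F9E0FF399C742F83A6' = 'Disk4'
      }

      foreach ($id in $disks.Keys) {
          # Same call as in the guide, just looped over the table.
          Set-PhysicalDisk -UniqueId $id -NewFriendlyName $disks[$id] -MediaType HDD
      }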
  18. 2 points
    Lots of data in this article: https://www.servethehome.com/surreptitiously-swapping-smr-into-hard-drives-must-end/
  19. 2 points
    They didn't say anything about resilvering... which puts any drive that's being added to an already-utilized RAID volume into a relentless write task that consequently puts a strain on any SMR drive. It doesn't matter if it's in a datacenter or a home NAS scenario... resilvering works the same way and puts a lot of stress on the drives in the volume. This is classic bait 'n' switch, WD. Not good.
  20. 2 points
    Red NAS drives, SMR versus CMR. I've investigated this a bit further and came up with the following conclusions. Western Digital is not transparent with any of this information, so this is based just on what I found.
    1. The WDx0EFRX drives appear to be the older model. I purchased WD Reds in 2013 and they match the 2013 datasheet; ditto for some 3TB Reds I bought in 2016. As recently as the 2018 datasheet, WD listed WD40EFRX drives in their NAS datasheet. However, that was also the first appearance of the WDx0EFAX drives, in 10 and 12 TB sizes.
    2. Their latest datasheet, published in December 2019, lists both WDx0EFRX and WDx0EFAX models for Reds, with interesting differences in cache and speed listed between the two, without explanation.
    3. Amazon and others still have WDx0EFRX and WDx0EFAX drives listed separately. I purchased a "spare" WD Red over the weekend; it arrived today and is a WDe0EFRX model.
    4. QNAP has a hardware compatibility list. My NAS, a QNAP TS451, does not list WDx0EFAX as a compatible drive. It does have WDx0EFRX spelled out.
    5. On the Synology compatibility list, the WD60EFAX and the WD20EFAX are listed as SMR drives.
    The following is not verified, but was mentioned in the QNAP and Synology forums: the WDx0EFAX drives may have been modified through cache to give SMR drives better compatibility with RAID. Here is a link to the datasheets I've found: https://drive.google.com/drive/folders/1EcjO5Pih7BilAshWhYcxbG6pFTwWWAOj?usp=sharing
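    If you want to check what is already installed in a Windows box without pulling drives, something like this should list the model strings so you can match EFRX versus EFAX against the datasheets. A sketch only; note that drives sitting behind a hardware RAID controller may report the logical volume rather than the physical drive model.

      # List model, serial and size for every physical drive Windows can see.
      Get-CimInstance Win32_DiskDrive |
          Select-Object Model, SerialNumber,
                        @{ n = 'SizeGB'; e = { [math]::Round($_.Size / 1GB) } } |
          Sort-Object Model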
  21. 2 points
    The spec sheet for the Dell XPS 8700 shows that the SATA ports are SATA 3.0. https://downloads.dell.com/manuals/all-products/esuprt_desktop/esuprt_xps_desktop/xps-8700_reference guide_en-us.pdf The 860 EVO is a very good drive. You may need a 2.5" to 3.5" drive adapter to mount it. https://downloads.dell.com/manuals/all-products/esuprt_desktop/esuprt_xps_desktop/xps-8700_owner's manual_en-us.pdf
  22. 2 points
    https://www.woot.com/plus/microsoft-surface-books-surface-pro-4-tablets?ref=w_cnt_gw_dly_wobtn
  23. 2 points
    Good overview: https://www.servethehome.com/hpe-proliant-microserver-gen10-plus-v-gen10-hardware-overview/
  24. 2 points
    iLO 2.73 released: https://support.hpe.com/hpsc/swd/public/detail?swItemId=MTX_ba3437a6c8d843f39ab5cace06
    UPGRADE REQUIREMENTS: OPTIONAL
    ***ATTENTION*** Note for ESXi users: If you are booted from the Embedded SD Card, it is strongly recommended that you reboot the server immediately after updating the iLO firmware.
    FIRMWARE DEPENDENCY: Hewlett Packard Enterprise recommends the following or greater versions of iLO utilities for best performance:
    - RESTful Interface Tool (iLOREST) 2.3
    - HPQLOCFG v5.2
    - Lights-Out XML Scripting Sample bundle 5.10.0
    - HPONCFG Windows 5.3.0
    - HPONCFG Linux 5.4.0
    - LOCFG v5.10.0
    - HPLOMIG 5.2.0
    KNOWN ISSUES:
    - Fibre Channel ports are displayed with degraded status if they are configured but not attached.
    FIXES: The following issues are resolved in this version:
    - Added fix for Embedded Remote Support in an IPv6-only environment.
    - Added fix for Embedded Remote Support data collection for systems with multiple Smart Array Controllers.
    Enhancements:
    - Suppress SNMP traps for NIC link up/link down events that occur during POST.
  25. 2 points
    In theory it should work with a SATA M.2 disk, yes, as it uses a SATA controller. What I assume @schoondoggy was referring to is an HBA adapter that allows actual SATA disks to be added.
  26. 2 points
    I had the same problem. The solution that worked for me was to change the Power Regulator setting to "OS Control Mode" in iLO. Hope this helps.
  27. 2 points
    I want a video, then when it all goes tits up he can at least upload it to Youtube
  28. 2 points
    I am interested in (old style):
    - motherboards/NICs for 10GbE (especially with AMD CPUs)
    - computer cases for lots of disks, say 12+
    … and anything in the 'Home and Family' topic series. Have a nice show! 🙂 PS: trying to update my next-door neighbour's 2009 laptop (Windows 7, HDD) to Windows 10 as I write.
  29. 2 points
    Hello, I found and 3D printed this; not a great design, but it does the job. https://www.thingiverse.com/thing:3205477
  30. 2 points
    Use a VPN if possible. There are quite a few vulnerabilities in the RDP stack, so we have closed it for clients unless they specifically sign a waiver.
  31. 2 points
    For the last 10 years or so, I've been using OneNote to help manage these things. I'll download the manual, info, etc., and scan receipts into the folder, and I have a separate page for each appliance. Since it's on OneDrive, it's available wherever I go. With regards to extended warranties, I think of them as very limited insurance policies against the loss of the object. In the great majority of cases, it's not a good deal; they offer it to you to make money. Beware of confirmation bias: the handful of times it comes in handy are far more memorable than the majority of times it was a waste of money.
  32. 2 points
    Hey Dave, a few things that might help you and others. Haven't used the UDM yet (I'm waiting for the UDM Pro, which is still in final beta), but my understanding is that you can restore a CloudKey controller backup to the UDM's built-in CloudKey.

    Personally, in your configuration, I wouldn't physically reconfigure and move coax feeds and equipment. I would install the UDM in the basement, replacing the existing gear with just a simple cable swap. Sure, you're wasting the built-in AP, but everything else is much more straightforward. Theoretically, you should be able to restore your CloudKey backup and have almost the same network up and running in just a few minutes. Then you can start deconstructing or reconfiguring more at your leisure, rather than out of the necessity of getting the network up and running for the entire household with no downtime 🙂

    In your review of your existing setup, IMHO, the primary benefit of Unifi, even more than the wide choice of physical AP units and mounting options, is the extensive configurability and monitoring/status options. You kinda touched on this towards the end of the podcast, but the ability to limit the radio power, turn off the auto settings, and assign the Wi-Fi channels (especially on the crowded 2.4 GHz band) to non-overlapping channel numbers is a big win for anyone trying to fix dead spots or avoid buying extra APs as a "brute force" solution to coverage. (Not that there is anything wrong with that; sometimes spending $100 on an extra AP instead of spending hundreds of dollars of time and effort to tweak is the right choice.)

    It wasn't clear that you are fully exploiting the Unifi flexibility to fix your Ring camera/doorbell problems. The first thing I usually do with a Unifi setup is to create a 2.4 GHz-only SSID and enable it only on the AP radio that is physically the right unit for the doorbells (or any IoT device that only supports 2.4 GHz) to connect to. Overriding the autoconnect/automatic behavior in Ring and other devices and forcing the connection to a specific AP solves almost all the Wi-Fi problems with these and similar devices that have somewhat dumb Wi-Fi firmware or less than ideal reliability. It's worth the trouble to re-program the SSID inside the Ring or other device, and the results are much better than just having multiple APs and hoping they are in range.

    I'm really curious whether the UDM will be successful in bringing Unifi to the general consumer market, but I'm skeptical it will really be able to displace Eero, Google, Orbi, and other true consumer gear. One irony is that right now the early adopters of the UDM are all sophisticated Unifi users, and that thing doesn't fit and looks awful in the otherwise beautiful rack porn photos they have been posting 🙂

    Granted, the UDM is a lot cheaper than buying the equivalent individual parts, but there are advantages to being modular too: easier service, not losing everything if a non-critical module goes down, etc. There will always be a lively discussion between modular and integrated that goes all the way back to mainframes with terminals versus minicomputers and later PCs, so I'm not trying to re-ignite that long-standing debate, merely pointing out that saving money isn't always the most significant reason to choose one over the other. In the case of Unifi, both fans and users are primarily looking for new functionality. Personally, I would prefer to see some new capabilities made available, regardless of whether it is all-in-one or requires a new box. I can work around price and modularity issues, but I can't work around the lack of a critical feature.

    So, to bring this home, the only feature the UDM provides that doesn't exist in the current gear is the new USG router/firewall. Specifically, the UDM is rated to handle 1 Gbps speeds with full hardware-speed packet analysis and intrusion processing. The current USG is only able to handle 100 Mbps and is severely taxed in performance at that speed. This is significant because consumer fiber and high-speed home Internet connections have zoomed from 3 Mbps to over 1 Gbps in many urban and metropolitan areas. Since you mentioned you don't have a USG in your current setup, I think you aren't in a good position to really appreciate the difference provided by the UDM versus the existing Unifi gear. I know some Unifi users prefer to use a separate router or the Ubiquiti EdgeRouter products because of these limitations, and thus don't have the integrated management provided by using the USG.

    On a positive note, the UDM finally removes the insecure PPTP VPN protocol, but it has not yet added support for OpenVPN for incoming VPN (to connect back to your home when you are away, or to use your home network as your own private VPN Internet gateway instead of a paid service), and that is a bit disappointing.
  33. 2 points
    I was looking for a YouTube video on QNAP's QVPN product, and right up there in the top search results was a Home Server Show legend: Mike Faucher, aka PCDoc. He has been relatively active recently on his YouTube channel and, from what I can see, has been posting useful and interesting content of interest to RESET users. Much like he was back in the halcyon days of the Home Server Show, he is our "Joe Friday": "Just the facts, ma'am." The QVPN video was exactly what I was looking for. Search for his name on YouTube; here is a link to his website: https://thedocsworld.net/
  34. 2 points
    I have an X3421 Gen10 MicroServer w/ the stock 8GB RAM running the Windows Server 2019 eval. I did the standard install, then added Hyper-V and container support. This isn't part of a domain; nothing is set up beyond the initial install and normal Windows updates. The boot disk is an MX500 SSD attached to SATA5, with two 4TB 3.5" drives in the cage. I was seeing ~20% CPU utilization for the SYSTEM process and ~22% for SYSTEM INTERRUPTS. I tracked that down to the vEthernet device: when I uninstalled Hyper-V and removed the vEthernet device, it dropped down to ~12.5% CPU for SYSTEM and ~2.5% CPU for SYSTEM INTERRUPTS. Note that this is looking at the Task Manager Performance tab. If I right click on SYSTEM and select Go To Details, it shows SYSTEM taking about 6% CPU and SYSTEM INTERRUPTS taking about 2%; I don't know what causes this discrepancy. Running LatencyMon (for about a 30 second run) shows: highest measured time 343, highest reported ISR time 33.8 (storport.sys), highest reported DPC time 212.3 (ndis.sys), total hard page faults 26. I'm not sure how to track down the source of the SYSTEM process CPU usage, and I don't see anything unusual in the event logs. Truthfully I'm more concerned about the SYSTEM CPU usage, but both seem high to me for a server that is basically just sitting there. Is there something strange going on, or is this just the cost of running Server 2019 on a relatively low-performance CPU? This server is replacing Home Server 2011 on a MediaSmart EX495, and idle CPU on that was about 3-5%. I have installed BIOS ZA10A360, chipset driver WS2012R2_W8_1, AMD Chipset Graphic Driver 17.1.1, and the Broadcom NX1 driver (cp031155), all from the HP site. Not sure what else to try. Is this normal for Server 2019 with no load? Thanks in advance for any advice, -Jim
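    If it helps anyone digging into the same thing, the figures Task Manager and LatencyMon summarise can also be sampled directly from PowerShell; a sketch using the standard Windows performance counters:

      $counters = '\Processor(_Total)\% Interrupt Time',
                  '\Processor(_Total)\% DPC Time',
                  '\Processor(_Total)\% Privileged Time'

      # 10 samples, 2 seconds apart; if Interrupt/DPC time drops after removing
      # the vEthernet adapter, the NIC/vSwitch driver is the likely culprit.
      Get-Counter -Counter $counters -SampleInterval 2 -MaxSamples 10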
  35. 2 points
    Just in case anyone else has this problem: the solution was to install the latest Broadcom drivers. I used this one: https://www.dell.com/support/home/us/en/19/drivers/driversdetails?driverid=cn7mv&oscode=ws19l&productcode=poweredge-r430 and I am now down to around 2-3% CPU at idle. No idea why the Broadcom NX1 driver (cp031155) from the HPE site had such high CPU usage. -Jim
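    For anyone comparing before and after the driver swap, a quick way to confirm which NIC driver version Windows is actually using (a sketch via the standard WMI driver class):

      # Show driver version/date for the Broadcom NICs so you can confirm
      # the new package replaced the HPE cp031155 driver.
      Get-CimInstance Win32_PnPSignedDriver |
          Where-Object { $_.DeviceName -like '*Broadcom*' } |
          Select-Object DeviceName, DriverVersion, DriverDate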
  36. 1 point
    Just for reference I ordered the following RAM which works. Crucial CT2K16G4DFD8266 32 GB Kit (16 GBx2) (DDR4, 2666 MT/s, PC4-21300, Dual Rank x8, DIMM, 288-Pin) https://www.amazon.co.uk/gp/product/B0736W5BH2/ref=ppx_yo_dt_b_asin_title_o06_s00?ie=UTF8&psc=1
  37. 1 point
    You need to confirm that the card is bootable. Instead of a cheap PCIe SATA card, you could use an LSI HBA, which starts at around $30. Connect the four drive bays to the LSI and run the SSD from port 1 of the onboard SATA controller.
  38. 1 point
    Fire up the MCT and it'll download Win10 to the USB stick for you, then boot from it and install..
  39. 1 point
    No. I have a RedHat 8 installed on it running from the same SSD. I'm using the OS for my apps and KVM is just for lab servers.
  40. 1 point
    Sorry, I did not see your reply, yes those should work.
  41. 1 point
    Great video on exactly what SMR drives are and why they are a problem when "resilvering" a RAID array. I did not know that write heads are twice as wide as read heads.
  42. 1 point
    USB keyboard will be fine, either directly to the laptop or via the dock..
  43. 1 point
    I throw this out every once in a while. Is anyone interested in writing up "semi-formal" reviews here on the forums? I say semi-formal because they don't have to be pro level, just a good attempt at telling the story about the gear. Something you have purchased lately. You don't have to go buy stuff, just incorporate what you have already purchased. Hit me up with any questions. You never know where it will lead!
  44. 1 point
    When I last did one of these, I put the SSD in a USB dock, used the included software to clone the C drive to it, then put the SSD in the place where the spinner was. Hopefully you have some backup mechanism in case issues arise.
  45. 1 point
    Guys, I think it's a bug in Linux Firefox/Chromium, because after booting into Windows, Edge shows everything as it should. So it's FIXED! Kinda...
  46. 1 point
    The MicroServer Gen8 has issues booting from the ODD port. If you are using the onboard SATA controller for the front bay drives, the ODD SATA port will only be bootable if the B120i controller is enabled, not AHCI. In your case it sounds like you have the front bays connected to your P212, so this should not be an issue. I would recommend using a SFF-8087 breakout cable: connect it to the SFF-8087 connector on the system board and use the first SATA port from that connection. Of the four SATA ports on the SFF-8087 connector, the first two are SATA III 6Gb/s, so you will get the best performance from your SSD. Also, there are no boot issues using the SFF-8087 SATA ports.
  47. 1 point
    Perhaps we should all sit around the Christmas tree shucking drives
  48. 1 point
    It may be a good idea to not connect the drives to the new P410 until you get the driver loaded and update the firmware.
  49. 1 point
    It seems to be going fine, no issues. VPN works as expected. The VAR that installed it takes care of the technical aspects. The customer complains about the support cost, but they do not have their own IT, so they need the help. Fortinet does very well with SMB and education. I am not sure how cost-effective their WAPs are, but they seem to be a nice end-to-end solution.
  50. 1 point
    I just asked the questions. Others provided the answers. And I thank them again as my server is still humming along just fine.