RESET Forums (homeservershow.com)

All Activity


  1. Today
  2. Al_Borges

    Contemplating A New Build

    In the past, I have used Acronis products, which, while not free, are pretty reasonably priced. They work well.
  3. Yesterday
  4. Trig0r

    Contemplating A New Build

    Yeah, there's a free Veeam product, can't remember the name off the top of my head at the mo. Backup and Replication or something, though I think they changed the name recently..
  5. mattb75

    Over my head... a little

    Hi. Have used ESXi and Hyper-V separately and now run a hybrid of both in my home network. Hyper-V inside a Windows box running all the home network VMs (e.g. UniFi Controller, Plex server, an Nginx reverse proxy for external access; see the sketch below) - if something on the home use network side fails, everyone in the house knows this box can be powered down and back up again, and 99 times out of 100 everything will simply start back up again and work. Also, having all the home file shares on a physical Windows machine rather than inside a VM means if anything happens to me, it should be easier for anyone else to work out what's set up where! For lab use, though, I'd use ESXi every time. Find it much more configurable, with more pre-built images available and, dare I say it, more enjoyable to use than Hyper-V!
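
    For anyone wanting to replicate the reverse proxy piece, it boils down to a small Nginx server block. A minimal sketch, assuming Plex as the backend; the hostname plex.example.com and the backend address 192.168.1.10:32400 are placeholders, and you would add TLS on top for external access:

        server {
            listen 80;
            server_name plex.example.com;  # placeholder hostname

            location / {
                proxy_pass http://192.168.1.10:32400;  # internal Plex address (placeholder)
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
            }
        }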
  6. Last week
  7. SortaOldguy

    Over my head... a little

    I'm interested in doing something similar. Back in the homeservershow days I built a server using an ASUS motherboard and an i3-2100. Installed Windows Server 2012, which I had a license for back then, and installed WHS2011 as a VM along with some other desktop OSes. Used WHS to back up the desktops on the home network, but that started getting funky. Used an HD HomeRun to record cable TV and save it to the WHS, but got rid of cable in favor of AT&T gigabit (which is great). Win Server 2012 is fairly outdated and WHS2011 isn't useful for much. So, I want to redo things, including maybe using MS Hyper-V Server 2019 standalone to run various OSes. The hardware has been working fine for about 6 years, with no problems but the occasional drive failure and some trouble getting Storage Spaces to release the faulty drive, which seems to require moving files around and deleting volumes. But the server won't last forever. Been looking at YouTube videos about Dell R710s being used to set up virtual servers. Appears you can pick up a used one coming off lease for about $200, and I was thinking of getting one and installing Hyper-V Server 2019, which sounds like what you want to do.
  8. skipcox

    Contemplating A New Build

    Is there a piece or pieces of software that can back up my client computers and the server itself, similar to what 2016 Essentials does?
  9. skipcox

    Contemplating A New Build

    Thanks for the Win 10 solutions. I did not realize there was such a version.
  10. JackoUK

    Contemplating A New Build

    Sounds like Windows Pro for Workstations will be a good fit for you. Features:
    - 4 CPUs
    - ReFS with additional data integrity checking
    - Storage Spaces with automatic rebuilding of data from failed disks (needs lots of disks to be effective, though)
    - SMB Multichannel to combine multiple network connections for bandwidth and resilience
    - increased session limit (not sure: 40 instead of Pro's 20?)
    Probably half the cost of a Server license, and no CALs or such stuff.
  11. Trig0r

    Over my head... a little

    Why do you want to go bare metal? What are you trying to achieve?
  12. Trig0r

    Contemplating A New Build

    What's stopping you from running a desktop OS, say Win10, and then just having a data drive that the machines back up to with something like Veeam?
  13. Does anyone already have some experience with a non-standard CPU? https://www.servethehome.com/hpe-proliant-microserver-gen10-plus-ultimate-customization-guide/2/ E.g. Xeon E-2236
  14. Juggy

    Gen10+ Anyone got one yet?

    I have one, got the 16GB RAM, Xeon 2224 version. Put in a cheapish NVMe card and a 512GB Samsung PM981 NVMe SSD for the OS (works perfectly), and trunked 2 of the network ports LACP-style for 2Gbps throughput. Also have 4 x 10TB IronWolf drives in software RAID 0 (no need for data protection as it is mainly media that is replicated externally). I cannot for the life of me get the shared iLO working. Been a Dell guy for over 20 years and their iDRAC is much easier to use. A really nice little device, albeit a bit pricey.
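
    For reference, how the LACP trunk might look on the Linux side: a minimal sketch of an ifupdown bonding config (Debian/Proxmox style); the interface names eno1/eno2 and the addresses are placeholders, and the switch ports have to be configured as an LACP group as well:

        # /etc/network/interfaces (fragment)
        auto bond0
        iface bond0 inet static
            address 192.168.1.50/24        # placeholder address
            gateway 192.168.1.1
            bond-slaves eno1 eno2          # the two trunked ports (placeholders)
            bond-mode 802.3ad              # LACP
            bond-miimon 100
            bond-xmit-hash-policy layer3+4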
  15. There is no label on the Gen10+ chassis that I could find
  16. skipcox

    Contemplating A New Build

    I want to build a new home server using a quad-socket Xeon E5-4600 motherboard. I know it's overkill, but I just want to do it. I will be using it to back up 5 machines, with no virtual machines and no domain-joined machines. Basically, I want it to function like a WHS 2011 server or a Windows Server 2016 Essentials server using the skipdomainjoin script. (I have a single-socket machine running Win Server 2016 Essentials with the skipdomainjoin script now.) My question is what software I can use to run it. I know I can't use any Windows Server Essentials version because they only support two processors. It looks like Windows Server 2016 Standard is out of the question due to the per-core server and CAL licenses. I thought about Windows Server 2012 R2 Standard, as they only charge for the number of processors, not cores. I would also like to know how many server/client CALs I would need for Win Server 2012 R2 and Win Server 2016. Any thoughts would be appreciated. Thanks, Skip
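
    As a rough worked example (hedged, since Microsoft licensing terms change over time): Server 2012 R2 Standard is licensed per pair of processors, so a quad-socket board would need 2 licenses; Server 2016 Standard is licensed per core, with a minimum of 8 cores per processor and 16 per server, so four 8-core E5-4600 chips would mean 32 core licenses. On top of either, you need one CAL per user or device that accesses the server, e.g. 5 device CALs for 5 machines.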
  17. kfonda

    Over my head... a little

    Hi, I just managed to get my hands on an HP DL380p Gen8 with dual Xeon E5-2665 8-core processors, 128GB of RAM and 8 x 600GB SAS HDs. My plan is to set it up as a home lab server running a bare-metal hypervisor. I have no real experience with any of the hypervisors and would like some opinions on what I should start with. Thanks in advance for any help.
  18. Trig0r

    How do I use the Media Creation Tool?

    Fire up the MCT and it'll download Win10 to the USB stick for you, then boot from it and install..
  19. I solved my problem. With the MegaCLI preboot command line (Ctrl+Y), I created a RAID volume, and I installed the OS from a bootable USB. I was able to select the RAID volume during the installation, and the BIOS boots from the RAID volume on the LSI RAID controller. I tried to enter configuration mode (WebBIOS) by pressing Ctrl+H and it's OK. 🙂
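
    For reference, the same volume creation can be done from a running OS with LSI's MegaCLI utility instead of the Ctrl+Y preboot CLI. A minimal sketch; the enclosure:slot IDs 252:0 and 252:1 are placeholders for your actual drives (as listed by -PDList), and -r1 selects RAID 1:

        # list physical drives to find their enclosure:slot IDs
        MegaCli64 -PDList -aALL
        # create a RAID 1 logical drive from two of them on adapter 0
        MegaCli64 -CfgLdAdd -r1 [252:0,252:1] -a0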
  20. I'm about ready to replace my desktop's HD with an SSD - waiting to get the 3.5" to 2.5" adapter. I've got a good USB drive that I'm ready to use for this purpose, and I've downloaded the Media Creation Tool. Now, what do I do next with the USB drive and the Media Creation Tool? Do I simply copy the Media Creation Tool onto the USB drive? That doesn't seem right. I'm guessing that I have to plug the USB drive into my desktop, then run the Media Creation Tool. And then what?
  21. Trig0r

    Gen10+ Anyone got one yet?

    Ok, so given I'm not needing to use a PCIe card for NICs anymore, I could drop some sort of SSD in there using a card, but are they bootable? If they are, then I could get a smaller drive for boot and then the larger one for VMs to sit on... Obviously I'd like to reuse as much as possible, but it's looking like just the 3.5" drives are going to be swapped over and then everything else replaced..
  22. The label on the chassis
  23. HJB

    HP Proliant ML350p G8 Watercooling!

    Hi Bruno, welcome to the club. I hope you take the time to follow through and publish your modifications as well, so we can all learn from each other. Just for all of you to know, I just succeeded under Proxmox in passing through a PCIe slot with an LSI HBA controller into a FreeNAS VM. Yes, I'm running FreeNAS virtualized and am happy with it. You can find the topic here.
  24. This is an effort from me for the community to document my quest of getting FreeNAS to run as a VM under Proxmox, with an LSI HBA adapter forwarded to FreeNAS so we get the best of both worlds. My reasoning for this was that I did not want to miss the comfort and stability of either Proxmox or FreeNAS, but I refuse to run 2 separate servers, as I have a very well equipped HPE ProLiant ML350p G8.

The Proxmox documentation showed me that the IOMMU has to be activated on the kernel command line. The parameters are intel_iommu=on for Intel CPUs and amd_iommu=on for AMD CPUs. The kernel command line needs to be placed in the variable GRUB_CMDLINE_LINUX_DEFAULT in the file /etc/default/grub. Running update-grub appends its content to all Linux entries in /boot/grub/grub.cfg.

    nano /etc/default/grub

    # If you change this file, run 'update-grub' afterwards to update
    # /boot/grub/grub.cfg.
    # For full documentation of the options in this file, see:
    # info -f grub -n 'Simple configuration'
    GRUB_DEFAULT=0
    GRUB_TIMEOUT=5
    GRUB_DISTRIBUTOR="Proxmox Virtual Environment"
    GRUB_CMDLINE_LINUX_DEFAULT="quiet"
    GRUB_CMDLINE_LINUX="root=ZFS=rpool/ROOT/pve-1 boot=zfs"
    # 22.5.2020 (HJB) added to enable PCIe passthrough.
    GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"
    # Disable os-prober, it might add menu entries for each guest
    GRUB_DISABLE_OS_PROBER=true

After changing anything modules-related, you need to refresh your initramfs. On Proxmox VE this can be done by executing:

    update-initramfs -u -k all

Finally, reboot to bring the changes into effect, and check that the IOMMU is indeed enabled:

    dmesg | grep -e DMAR -e IOMMU -e AMD-Vi

    [ 0.007731] ACPI: DMAR 0x00000000BDDAD200 000558 (v01 HP ProLiant 00000001 \xd2? 0000162E)
    [ 1.245296] DMAR: IOMMU enabled
    [ 2.592107] DMAR: Host address width 46
    [ 2.592173] DMAR: DRHD base: 0x000000fbefe000 flags: 0x0
    [ 2.592247] DMAR: dmar0: reg_base_addr fbefe000 ver 1:0 cap d2078c106f0462 ecap f020fe
    [ 2.592330] DMAR: DRHD base: 0x000000f4ffe000 flags: 0x1
    [ 2.592399] DMAR: dmar1: reg_base_addr f4ffe000 ver 1:0 cap d2078c106f0462 ecap f020fe
    [ 2.592481] DMAR: RMRR base: 0x000000bdffd000 end: 0x000000bdffffff
    [ 2.592550] DMAR: RMRR base: 0x000000bdff6000 end: 0x000000bdffcfff
    [ 2.592618] DMAR: RMRR base: 0x000000bdf83000 end: 0x000000bdf84fff
    [ 2.592686] DMAR: RMRR base: 0x000000bdf7f000 end: 0x000000bdf82fff
    [ 2.592755] DMAR: RMRR base: 0x000000bdf6f000 end: 0x000000bdf7efff
    [ 2.592823] DMAR: RMRR base: 0x000000bdf6e000 end: 0x000000bdf6efff
    [ 2.592892] DMAR: RMRR base: 0x000000000f4000 end: 0x000000000f4fff
    [ 2.592961] DMAR: RMRR base: 0x000000000e8000 end: 0x000000000e8fff
    [ 2.593030] DMAR: RMRR base: 0x000000bddde000 end: 0x000000bdddefff
    [ 2.593108] DMAR: ATSR flags: 0x0
    [ 2.593185] DMAR-IR: IOAPIC id 10 under DRHD base 0xfbefe000 IOMMU 0
    [ 2.593254] DMAR-IR: IOAPIC id 8 under DRHD base 0xf4ffe000 IOMMU 1
    [ 2.593324] DMAR-IR: IOAPIC id 0 under DRHD base 0xf4ffe000 IOMMU 1
    [ 2.593396] DMAR-IR: HPET id 0 under DRHD base 0xf4ffe000
    [ 2.593467] DMAR-IR: x2apic is disabled because BIOS sets x2apic opt out bit.
    [ 2.593468] DMAR-IR: Use 'intremap=no_x2apic_optout' to override the BIOS setting.
    [ 2.594425] DMAR-IR: Enabled IRQ remapping in xapic mode
    [ 4.286848] DMAR: dmar0: Using Queued invalidation
    [ 4.286932] DMAR: dmar1: Using Queued invalidation
    [ 4.355658] DMAR: Intel(R) Virtualization Technology for Directed I/O
    [ 111.511173] vfio-pci 0000:03:00.0: DMAR: Device is ineligible for IOMMU domain attach due to platform RMRR requirement. Contact your platform vendor.
    [ 151.942005] vfio-pci 0000:0d:00.0: DMAR: Device is ineligible for IOMMU domain attach due to platform RMRR requirement. Contact your platform vendor.
    root@pve:~#

The bottom 2 lines, vfio-pci 0000:03:00.0 and vfio-pci 0000:0d:00.0, are the onboard P420i and the LSI Logic SAS 9207-8i controller; one of them is the one we would like to pass through. The message "Device is ineligible for IOMMU" actually comes from the BIOS of the HPE server. So it seems like Proxmox is getting ready, but the server is still using the device for IPMI/iLO.

A hint found online mentioned adding a parameter allowing unsafe interrupts, so let's try it out. This one goes in /etc/modules (not the GRUB file):

    nano /etc/modules

    # /etc/modules: kernel modules to load at boot time.
    #
    # This file contains the names of kernel modules that should be loaded
    # at boot time, one per line. Lines beginning with "#" are ignored.
    # 22.5.2020 (HJB) Added ".allow_unsafe_interrupts=1" to enable PCIe passthrough
    vfio
    vfio_iommu_type1.allow_unsafe_interrupts=1
    vfio_pci
    vfio_virqfd

After a reboot, the "device is ineligible" line had disappeared when checking with:

    dmesg | grep -e DMAR -e IOMMU -e AMD-Vi

Yet as we tried to pass through the LSI controller, we were still confronted with QEMU aborting the VM start with exit code 1. It did the same with the P420i when we tried that as well. So I removed the .allow_unsafe_interrupts=1 entry and rebooted once more.

A search online for the text "Device is ineligible for IOMMU domain attach due to platform RMRR requirement. Contact your platform vendor" brought us to HPE Document ID c04781229, describing problems that occur when passing through PCIe devices (in the document it's GPUs). At first I understood just about nothing from the document, but together with the blog post of Jim Denton it started to make sense. (Thank you Jim for taking the time to post it.) After studying the blog post and the HPE document, it became clear to me that PCIe forwarding is not possible for the internal P420i controller, but is very well possible for physical PCIe slots. As I have an LSI HBA controller as well, this seemed promising.

From here on it went relatively smoothly; just follow the next steps. We need the HPE scripting utilities, so we add the HPE repository to either the Enterprise or the Community source list:

    nano /etc/apt/sources.list.d/pve-enterprise.list
    (or)
    nano /etc/apt/sources.list

Add a line with:

    deb https://downloads.linux.hpe.com/SDR/repo/stk/ xenial/current non-free

Now we add the HPE public keys and update:

    curl http://downloads.linux.hpe.com/SDR/hpPublicKey1024.pub | apt-key add -
    curl http://downloads.linux.hpe.com/SDR/hpPublicKey2048.pub | apt-key add -
    curl http://downloads.linux.hpe.com/SDR/hpPublicKey2048_key1.pub | apt-key add -
    curl http://downloads.linux.hpe.com/SDR/hpePublicKey2048_key1.pub | apt-key add -
    apt update

And we can install the scripting utilities with:

    apt install hp-scripting-tools

Now we download the input file for HPE's conrep script. Just to make sure where everything is, I switched to the /home dir:

    cd /home
    wget -O conrep_rmrds.xml https://downloads.hpe.com/pub/softlib2/software1/pubsw-linux/p1472592088/v95853/conrep_rmrds.xml

We're nearly there now, hang on. We need to know, of course, which PCIe slot our controller is in. We use lspci to list all our PCIe devices to a file and nano to scroll through it:

    lspci -vvv &> pcie.list
    nano pcie.list

In our case we found this. Our LSI controller, with PCIe device ID 0000:0d:00.0 as Proxmox knows it, is in slot 4:

    0d:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2308 PCI-Express Fusion-MPT SAS-2 (rev 05)
        Subsystem: LSI Logic / Symbios Logic 9207-8i SAS2.1 HBA
        Physical Slot: 4
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-

So we can create an exclude for that one:

    cd /home
    nano exclude.dat

Add the following in that file and save it:

    <Conrep>
      <Section name="RMRDS_Slot4" helptext=".">Endpoints_Excluded</Section>
    </Conrep>

Now we're ready to rock and roll - sorry about that, I mean to run the conrep utility from HP, which excludes our PCIe slot from the IOMMU/RMRR handling of the iLO/IPMI:

    conrep -l -x conrep_rmrds.xml -f exclude.dat

And we verify the result:

    conrep -s -x conrep_rmrds.xml -f verify.dat
    nano verify.dat

Now we should see something like this. Mind that Slot4 says Excluded:

    <?xml version="1.0" encoding="UTF-8"?>
    <!--generated by conrep version 5.5.0.0-->
    <Conrep version="5.5.0.0" originating_platform="ProLiant ML350p Gen8" originating_family="P72" originating_romdate="05/24/2019" originating_processor_manufacturer="Intel">
      <Section name="RMRDS_Slot1" helptext=".">Endpoints_Included</Section>
      <Section name="RMRDS_Slot2" helptext=".">Endpoints_Included</Section>
      <Section name="RMRDS_Slot3" helptext=".">Endpoints_Included</Section>
      <Section name="RMRDS_Slot4" helptext=".">Endpoints_Excluded</Section>
      <Section name="RMRDS_Slot5" helptext=".">Endpoints_Included</Section>
      <Section name="RMRDS_Slot6" helptext=".">Endpoints_Included</Section>
      <Section name="RMRDS_Slot7" helptext=".">Endpoints_Included</Section>
      <Section name="RMRDS_Slot8" helptext=".">Endpoints_Included</Section>
      <Section name="RMRDS_Slot9" helptext=".">Endpoints_Included</Section>
      <Section name="RMRDS_Slot10" helptext=".">Endpoints_Included</Section>
      <Section name="RMRDS_Slot11" helptext=".">Endpoints_Included</Section>
      <Section name="RMRDS_Slot12" helptext=".">Endpoints_Included</Section>
      <Section name="RMRDS_Slot13" helptext=".">Endpoints_Included</Section>
      <Section name="RMRDS_Slot14" helptext=".">Endpoints_Included</Section>
      <Section name="RMRDS_Slot15" helptext=".">Endpoints_Included</Section>
      <Section name="RMRDS_Slot16" helptext=".">Endpoints_Included</Section>
    </Conrep>

Time to reboot the Proxmox server one last time before we can celebrate. Add a PCIe device to the FreeNAS VM and select our PCIe device ID, i.e. the LSI controller. That was all; the VM is now happily starting with the forwarded controller. Since then I've also used this procedure to forward an Intel Optane 900p PCIe SSD.

Best regards
Henk
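
For completeness, the final GUI step (adding the PCIe device to the VM) can also be done from the Proxmox shell. A minimal sketch; the VM ID 100 is a placeholder for the FreeNAS VM's actual ID:

    # attach the LSI HBA at 0000:0d:00.0 to VM 100 as a PCI passthrough device
    qm set 100 -hostpci0 0000:0d:00.0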
  25. schoondoggy

    Gen10+ Anyone got one yet?

    I do not have one, but most reviews I have read bemoan the fact that there are no spare SATA ports or M.2 slots on the system board. The BIOS does appear to support bifurcation, so you should be able to run two NVMe drives on a fairly cheap card that has no PCIe switch: https://www.servethehome.com/hpe-proliant-microserver-gen10-plus-ultimate-customization-guide/3/
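
    If you try the bifurcation route, a quick sanity check that both drives enumerate, assuming a Linux host (a sketch):

        # each SSD should appear as its own PCIe endpoint...
        lspci | grep -i "non-volatile memory"
        # ...and as its own block device (/dev/nvme0n1, /dev/nvme1n1)
        lsblk -d -o NAME,MODEL,SIZE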
  26. The Rona lockdown boredom is really kicking in now. I've thought about upgrading my PC, but given I only did that in February, doing it again seems a bit pointless, especially given new Zen 3 CPUs are coming soonish, as are new GPUs from AMD and Nvidia.. So, I find my attention being drawn to the server... My existing server is built around the Node 804, so isn't small. I've got an i5-3470, 32GB RAM, 4x 3.5" drives, 2x 2.5" drives and a quad port NIC.. So, the 10+ comes with 4 NIC ports anyway, so I won't need a card for that, but I'm thinking that I won't get all of my drives in, specifically the 2x 2.5" SSDs that I use currently, one for the OS and one as VM storage. So, has anyone actually got hands on one of these things yet and can confirm that we're gonna be stuck for internal space to hide drives like we could on the Gen8?
  27. nrf

    Started photography

    Great! Welcome! Enjoy! You should find some resources here, although I can't believe this is the 'best' forum for photography. But do look around in this topic!
  28. bruno_rio021

    HP Proliant ML350p G8 Watercooling!

    Researching a lot on the internet, looking for improvements I can make to my ML (the excessive noise bothers me a lot), I found this excellent topic. I'm from Brazil, and here some things are hard to find, like Noctua's LGA 2011 coolers. For the amount charged for 1 Noctua NH-U9DX i4 here in BR, it is possible to buy 2x Cooler Master Hyper 212 Black Edition and 2x Noctua NF-A4x20 for the redundant PSUs. I'll start tracking my orders and wait for my purchases to arrive to start my mods.... Sorry for my bad English, and thank you for sharing the knowledge you have acquired throughout the process.