RESET Forums (homeservershow.com)

Leaderboard

  1. schoondoggy (Moderators): Points 13, Content Count 8,964
  2. Trig0r (Moderators): Points 13, Content Count 1,416
  3. sandwern (Members): Points 6, Content Count 11
  4. radaxian (Members): Points 5, Content Count 13

Popular Content

Showing content with the highest reputation since 06/10/2020 in all areas

  1. 5 points
    OK, it's time to solve all the problems with overheating. Stuff: 1x Noctua NF-A8-5V fan with USB power adaptor cable, 1x Akasa 3D Fan Guard AK-FG08-SL. https://imgur.com/a/7npqgdc Now the LSI 9271-4i temperature stays at no more than about 57 degrees.
  2. 3 points
    Thanks for the reply. It turns out I’d bent a few pins on the CPU socket. God only knows how!! I managed to carefully bend the pins back and I’ve managed to boot up with RAM slots 1 and 2 populated. All seems to be running ok for now. Spot the bent pins in the pic 😬
  3. 2 points
    Yes, for me the RAM was the limiting factor as well. I moved onto a DL360 Gen8. Issue with the MS is even if you can fit a mini-ITX board into the case with another low-profile CPU fan, you'll have to do some heavy modding to the I/O area as it's not removable. I tried to do something similar with the HP Z820 workstation because the case is really quite nice. Unfortunately the amount of modding required was beyond my skill and the effort didn't seem to be worth it.
  4. 2 points
    TDP of 69W is very high, even with the 65W replacement cooler from HP that costs a fortune, so I personally wouldn't use that CPU.
  5. 2 points
    It appears that should work, as it is unbuffered ECC. The MS Gen8 does not require HP-branded memory.
  6. 2 points
    Sorted it. I created a UEFI boot USB and it worked.
  7. 2 points
    I have run four consumer-grade SSDs on it.
  8. 2 points
    Hi all, here's a guide I'd like to share on Windows Storage Spaces and creating a 4-drive parity pool. In a nutshell, I have Windows Server 2019 and a Storage Spaces parity pool running very nicely on my Gen8. Here's the configuration I used and how to copy my setup. (I still believe ZFS or unRAID are far better choices as a filesystem on these limited servers, but if you need Windows like I do, then Storage Spaces can be an excellent alternative.) This is my "best effort" guide and by no means perfect. It does, however, yield excellent results for both read and write speeds.

    Hardware:
    - Gen8 MicroServer, 16GB RAM
    - CPU: stock for now (1270 V3 on its way)
    - Disks: 4x 3TB WD NAS drives in the front bays
    - SSD: Samsung Evo 850 265

    First lesson: DON'T use the Windows GUI to create the pool or virtual disk, as the GUI applies terrible defaults that you can't edit and will ruin performance. You must use PowerShell. Also make sure you're on the latest version of Windows Server, as a LOT has changed and been improved recently.

    Terms:
    - PhysicalDiskRedundancy: Parity
    - Columns: 4 (the data segments striped across disks; should match your 4 disks)
    - Interleave: 256K (the amount of data written to each "column", or disk; in this case a 256KB interleave gives us a 64K write to each disk)
    - LogicalSectorSize: 4096
    - PhysicalSectorSize: 4096
    - ReFS/NTFS cluster size: 64K

    Overall configuration: a 4-drive file system, plus one bootable SSD in RAID mode.

    BIOS setup (initial):
    - F9 into the BIOS and set the B120i controller to RAID mode
    - F5 into the RAID manager and create 1 individual RAID0 logical drive for the SSD
    - Set the SSD as the preferred boot drive (yes, in the same screen)
    - Set the cluster size to 63
    - Enable caching

    Windows install:
    - Install Windows Server 2019 Standard, GUI edition, from ISO
    - Offer up the B120i RAID drivers via a USB stick so the wizard can see the SSD RAID0 drive.
      Filename: p033111.exe (have the drivers extracted)
    - Run Windows Update, patch, and reboot

    BIOS setup (post Windows):
    - Once Windows is up and running, go back into the F5 RAID manager and finish the setup of the 4 front drives as 4x RAID0 logical drives
    - Check the SSD is still set as the preferred boot drive (yes, in the same screen)
    - Set the cluster size to 63

    Windows configuration of Storage Spaces:
    At this point you should see 4 individual drives ready to be used as a storage pool. Try to set each disk to have a cache (not all drives support this):
    - Win + X to open the side menu
    - Device Manager
    - Expand Disk Drives
    - Right-click the "HP Logical Volume" for each drive
    - Check "Enable write caching on the device" (if it doesn't work, don't stress; it's optional but nice to have)

    PowerShell (run as Admin). Determine the physical disks available for the pool we're about to create:

    Get-PhysicalDisk | ft friendlyname, uniqueid, mediatype, size -auto

    Your output will look something like this, so identify the 4 drives that are the same and take note of their uniqueid. Mine are the bottom four drives, all 3TB in size:

    friendlyname          uniqueid                         size
    ------------          --------                         ----
    SSD HP LOGICAL VOLUME 600508B1001C5C7A1716CCDD5A706248 250023444480
    HP LOGICAL VOLUME     600508B1001CAC8AFB32EE6C88C5530D 3000559427584
    HP LOGICAL VOLUME     600508B1001C51F9E0FF399C742F83A6 3000559427584
    HP LOGICAL VOLUME     600508B1001C2FA8F3E8856A2BF094A0 3000559427584
    HP LOGICAL VOLUME     600508B1001CDBCE168F371E1E5AAA23 3000559427584

    Rename the friendly name based on the uniqueid from above and set the media type to HDD:

    Set-PhysicalDisk -UniqueId "Your UniqueID" -NewFriendlyName Disk1 -MediaType HDD

    You will need to run that 4 times, once with each uniqueid, creating a new friendly name for each drive.
    I called mine "Disk1", "Disk2", etc.:

    Set-PhysicalDisk -UniqueId "600508B1001C2FA8F3E8856A2BF094A0" -NewFriendlyName Disk1 -MediaType HDD
    Set-PhysicalDisk -UniqueId "600508B1001CDBCE168F371E1E5AAA23" -NewFriendlyName Disk2 -MediaType HDD
    Set-PhysicalDisk -UniqueId "600508B1001CAC8AFB32EE6C88C5530D" -NewFriendlyName Disk3 -MediaType HDD
    Set-PhysicalDisk -UniqueId "600508B1001C51F9E0FF399C742F83A6" -NewFriendlyName Disk4 -MediaType HDD

    Verify the disks have been set correctly. The following shows which physical disks are in the primordial pool on the server and CAN be used in the new pool ("primordial" just means local to your server and available). You're just checking here that the renaming worked and that they are all set to the HDD media type:

    Get-StoragePool -IsPrimordial $true | Get-PhysicalDisk | Where-Object CanPool -eq $True

    You should see your four drives with the nice names you set, like "Disk1".

    Now find out your storage subsystem name, as we need it for the next command; just take note of it. It looks like "Windows Storage on <servername>"; mine is "Windows Storage on Radaxian":

    Get-StorageSubSystem

    The following creates a new storage pool named "Pool1" that uses all available disks and sets the logical sector size:

    New-StoragePool -FriendlyName Pool1 -StorageSubsystemFriendlyName "Windows Storage on Radaxian" -PhysicalDisks (Get-PhysicalDisk -CanPool $True) -LogicalSectorSizeDefault 64KB

    Now create the virtual disk on the new pool, with 4 columns and parity set correctly.
    (This is the critical part to do via PowerShell:)

    New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "VDisk1" -ResiliencySettingName Parity -NumberOfDataCopies 1 -NumberOfColumns 4 -ProvisioningType Fixed -Interleave 256KB -UseMaximumSize

    Those two commands should complete without error; if they don't, go back and check your syntax.

    Go back into the Windows GUI and open Server Manager\File and Storage Services\Servers. You should see the storage pool and the virtual disk we created in the previous steps:
    - Storage pool: Pool1
    - Virtual disk: VDisk1

    Select Disks in the GUI, identify your new VDisk1 and right-click it. Set it to Online; this will also set it to use a GPT partition table.

    On the same screen, in the Volumes pane below, click TASKS and select "New Volume":
    - Select ReFS and a cluster size of 64K
    - Enter a volume name like "Volume1" or whatever you want to call it
    - Select a drive letter such as Z
    (You can use NTFS here for slightly better performance, but I'm sticking with ReFS as it has some benefits.)

    You'll now have a storage pool, a virtual disk on top, and a volume created with optimal settings.

    Go back into PowerShell and enable power-protected status if applicable (just try it, no harm; ideally your server is connected to a basic UPS to protect it from power outages):

    Set-StoragePool -FriendlyName Pool1 -IsPowerProtected $True

    Check that the sector sizes of the virtual disk and all relevant settings are correct:

    Get-VirtualDisk | ft FriendlyName, ResiliencySettingName, NumberOfColumns, Interleave, PhysicalDiskRedundancy, LogicalSectorSize, PhysicalSectorSize

    Example output:

    FriendlyName ResiliencySettingName NumberOfColumns Interleave PhysicalDiskRedundancy LogicalSectorSize PhysicalSectorSize
    VDisk1       Parity                4               262144     1                      4096              4096

    You're done... enjoy the new volume. At this point you can share out your new volume "Z" and allow client computers to connect.

    Some other PowerShell commands I found useful. Get more verbose disk details around sectors:
    Get-VirtualDisk -FriendlyName VDisk1 | fl
    Get-PhysicalDisk | select FriendlyName, Manufacturer, Model, PhysicalSectorSize, LogicalSectorSize | ft

    Check if TRIM is enabled (the output should be 0):

    fsutil behavior query DisableDeleteNotify

    If TRIM is not enabled, you can turn it on with these commands:

    fsutil behavior set disabledeletenotify ReFS 0
    fsutil behavior set disabledeletenotify NTFS 0

    Check the power-protected status and cache:

    Get-StorageAdvancedProperty -PhysicalDisk (Get-PhysicalDisk)[0]

    Once your data has been migrated back to your new pool from backup, make sure you run this command to spread the data out properly; it rebalances the Spaces allocation for all of the Spaces in the pool:

    Optimize-StoragePool -FriendlyName "Pool1"

    I'm yet to get my Xeon in the mail, but once that's installed I think the disk performance will go up even higher, as the stock CPU is junk.
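The capacity arithmetic implied by the guide above (4 columns, single parity, 4x 3TB drives) can be sanity-checked with a short Python sketch. This is an illustration, not part of the original post, and assumes single-parity Storage Spaces dedicates the equivalent of one column per stripe to parity:

```python
# Sanity-check arithmetic for a single-parity Storage Spaces pool
# (illustrative; assumes one of N columns holds parity per stripe,
# so usable capacity is (N-1)/N of the raw total).

def parity_pool_usable_bytes(disk_bytes: int, n_disks: int, n_columns: int) -> int:
    """Approximate usable capacity of a single-parity pool."""
    total = disk_bytes * n_disks
    return total * (n_columns - 1) // n_columns

# 4x 3TB WD NAS drives, 4 columns, as in the guide; the per-disk byte
# count is taken from the Get-PhysicalDisk output above.
disk = 3_000_559_427_584
usable = parity_pool_usable_bytes(disk, 4, 4)
print(round(usable / 1e12, 2), "TB usable")  # roughly 9 of the 12 TB raw
```

So of the 12TB raw, about 9TB should be usable on the Z volume; the remaining quarter is parity overhead.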
  9. 1 point
    Hi E3000, No, I've had it 10 months now and it has had no effect on the internal case fans at all. Pete
  10. 1 point
    Recommendations are always useful. I was quite impressed with their site when looking for the Gen10+. It was good to see accessories for a device listed under the main listing, so you can easily add them to a single basket rather than running around all over the place. For example: Microserver and Accessories. They have also emailed me to keep me informed about the expected delivery date. When I originally placed the order they were due in on 26/06.
  11. 1 point
    I have not played with Xpenology and ESXi together for a long time, but:
    1. Unless something has changed, Xpenology does not like RAID cards. HBAs work well and allow Xpenology to configure and manage the drives.
    2. In the past the P222 did not behave well as an HBA. I think the latest firmware from HPE is better at supporting HBA mode, but I have not tested it. Xpenology users seem to have good luck using LSI-based HBAs in passthrough with ESXi.
    3. For the onboard controller, you could add a breakout cable to the system board and use those ports for SSDs. ESXi tends to like RAID controllers, but there have been performance issues with the B120i drives and ESXi in the past. Some have had good luck with AHCI.
    Sorry I do not have more definitive answers, but these are the areas I would research. Perhaps someone else will have more detail.
  12. 1 point
    Thanks again for replying. Indeed, in the end I bought it. I'm waiting for it to be dispatched from Germany, and I paid much more than that: it was about 200 euros on the HPE website, but I found this supplier who sold it at 150. God, I hope it was worth it. I was uncomfortable with having only 2 screws; the fact that it did not fit precisely made me wonder whether the round surface of the cooler was touching the processor in the right way, as you suggested, but it's difficult to see from a side view. The compound seemed to adhere, but nonetheless, let's see if the official HP part, even without a fan, can yield better results. Thank you very very much for your help and suggestions.
  13. 1 point
    I'm going to start putting money aside to do a build for my new home server in December. I'll be going for the base model, putting in an E-2246G and 64GB RAM, and transferring my drives over from my Gen8 before probably offloading my two Gen8s, but I'll keep my ML110 G7 as my backup as it's got my LTO drive.
  14. 1 point
    Can I ask a simple question? Did you remove and replace the thermal paste? It needs to be fresh paste, spread thinly over the whole flat metal area. The screws should squash it down and spread it out, but a quick visual check will let you know if the paste is making good contact with both the CPU and heatsink. The iLO temp sensor is slow to respond and seems a bit inaccurate in my experience. Core Temp seems more accurate with the chipset in this Gen8; HWMonitor is OK and should read the same as Core Temp.
  15. 1 point
    From your testing it looks the issue is with slot 2. The memory controller is part of the CPU. It is possible that some debris got on the pins of the CPU and is causing slot 2 not to work. Check the pins and the pads to be sure they are clean.
  16. 1 point
    Thanks very much... my Gen10+ now has 64GB of memory and is happily running ESXi 😎
  17. 1 point
    I do not have a MS Gen10 Plus. It appears that the current bifurcation function on the PCIe x16 slot splits it into x8+x8. For cards like AOC-SLG3-2M2, bifurcation would need to be x4+x4+x4+x4. https://community.hpe.com/t5/proliant-servers-netservers/microserver-gen10-plus-pcie-bifurcation-support-issue/td-p/7092453#.XvdxqGhKguU
  18. 1 point
    That's certainly one way of doing it..
  19. 1 point
    This is what I found on the Unraid Community Forum.... "The M1015 is a PCIe v2 x8 card ... so it's designed for 8 lanes at 500MB/second/lane. If you connect it to an x4 slot, then as long as the motherboard supports PCIe v2 (or later) the card will have 2GB/s of bandwidth available with 4 lanes. Clearly that's enough for 8 drives => so the answer is you won't notice any degradation in performance. However, if the motherboard has PCIe v1 slots, the total bandwidth for 4 lanes is only 1GB ... which works out to 125MB/s/drive for 8 drives => this will clearly result in degraded performance when all drives are active at once."
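The bandwidth arithmetic in the quoted answer can be reproduced in a few lines of Python. This is an illustration, not part of the quote; the per-lane figures of roughly 250MB/s for PCIe 1.x and 500MB/s for 2.0 are the approximate usable rates the quote assumes:

```python
# Rough per-drive bandwidth when an 8-drive HBA sits in an x4 slot
# (illustrative; per-lane MB/s figures are the approximations used
# in the quoted forum answer, not exact protocol throughput).

MBPS_PER_LANE = {1: 250, 2: 500}  # PCIe generation -> approx MB/s per lane

def per_drive_bandwidth(gen: int, lanes: int, drives: int) -> float:
    """MB/s available per drive if all drives stream at once."""
    return MBPS_PER_LANE[gen] * lanes / drives

print(per_drive_bandwidth(2, 4, 8))  # PCIe v2 x4, 8 drives -> 250.0 MB/s each
print(per_drive_bandwidth(1, 4, 8))  # PCIe v1 x4, 8 drives -> 125.0 MB/s each
```

Which matches the quote: on a v2 x4 slot each spinning disk still gets plenty of headroom, while a v1 x4 slot caps all 8 drives at 125MB/s each when active simultaneously.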
  20. 1 point
    Could be a bottleneck, but given it's only going to be Plex, how many concurrent streams are you going to be pulling? Whatever that number is, it's unlikely you're going to be pulling enough from each drive to actually cause an issue. If you're bothered about NICs, do you have USB3? You could always drop a dual USB3 NIC on there.
  21. 1 point
    I paid retail for my CPU and the Heat pipe heatsink, I've never actually added up what my Gen8 cost me over the years, probably for the best..
  22. 1 point
    That is a ridiculous amount indeed!
  23. 1 point
    Same situation here; it took phone calls to several retailers, and only one shop agreed to deliver P13788-B21 by pre-order in 2 weeks. So it may be better to pre-order the iLO kit somewhere if possible; it will save money.
  24. 1 point
    I'd probably just get mine from my guy at CCS Media..
  25. 1 point
    @sandwern: Thanks buddy! I'll order mine tomorrow. Appreciate your help.
  26. 1 point
    Looking at my drives, I tend to have purchased HGST. Although they are owned by WD, they had their own designs and factories. Don't forget about Toshiba drives; Toshiba has always built good drives. When WD bought HGST they had to sell some of HGST's tech and factories to Toshiba, and there seems to be a good deal of similarity between Toshiba's Enterprise and NAS drives and the previous HGST designs. The issue, of course, is that you don't see Toshiba go on sale.
  27. 1 point
    Every time I see discussions about these servers using ECC memory, I wonder how much trouble ECC has saved us from. I for one have had only a single instance of a non-correctable ECC failure across multiple systems over multiple years, so I wonder if there is a way to tell how many times the memory was silently corrected.
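On counting silent corrections: on Linux, the kernel's EDAC subsystem exposes corrected-error counters in sysfs, which a short Python sketch can read. This is a general Linux mechanism, not something from this thread, and the counters only exist on hardware with ECC and a loaded EDAC driver:

```python
# Read ECC error counters from the Linux EDAC sysfs interface.
# ce_count = corrected errors, ue_count = uncorrected errors, per
# memory controller. Returns an empty dict on systems without EDAC.
import glob
import os

def ecc_counts(root: str = "/sys/devices/system/edac/mc") -> dict:
    """Return {controller: (corrected, uncorrected)}; empty if no EDAC."""
    counts = {}
    for mc in glob.glob(os.path.join(root, "mc*")):
        try:
            with open(os.path.join(mc, "ce_count")) as f:
                ce = int(f.read())
            with open(os.path.join(mc, "ue_count")) as f:
                ue = int(f.read())
        except OSError:
            continue
        counts[os.path.basename(mc)] = (ce, ue)
    return counts

print(ecc_counts())  # e.g. {'mc0': (3, 0)} with ECC hardware; {} otherwise
```

A non-zero ce_count with a zero ue_count is exactly the "silently corrected" case the post wonders about.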
  28. 1 point
    Having raised 3 now-adult daughters, two of whom are gamers, a good family PC strategy is to consider "hand-me-downs" when making new PC purchases. I focused on compatibility as much as possible. When I upgraded, I was able to use the leftover stuff elsewhere. A couple of years ago, I went from 8 to 16 gigs on my box and was able to use my leftover 4-gig sticks to upgrade another kid's machine to 16 gigs. Ditto for things like cases and SSDs. Going too far in either direction (slim builds, E-ATX big builds) reduces the ability to do this. Hell, now my sons-in-law look forward to my hand-me-downs!
  29. 1 point
    Yes, ESXi 7 (the HPE one) can be booted from the internal USB port and use a portable ssd device via the USB 3.1 port as the datastore (after some standard tweaks).
  30. 1 point
    I built a semi-small quiet rig with a Fractal Design Nano S, a mini-ITX board and an Intel 8400 back in 2018. If building today I would personally go AMD... though I'm more of an Intel fan, at least AMD making good processors again forces Intel to get off their rear, and you can't deny the value of AMD and its multitasking power.
  31. 1 point
    I would prefer AMD, but AMD mini-ITX board choices are limited. True, the obsession with finding the smallest case likely creates a whole set of other issues. The Cooler Master 130 is a great case. I have also been looking for an opportunity to try one of these: https://www.newegg.com/white-silverstone-sugo-series-mini-itx-tower/p/N82E16811163232
  32. 1 point
    WHSv1 may still work, but I'm not sure you'd want it to in this day and age. I recently shuttered my WHS2011 because I ran into some issues:
    1. With every release of Windows 10 updates, I've seen the gradual removal of the functional integration with WHS2011. Things like: the Connector agent on some clients would just one day stop connecting to the server, and on the Dashboard, clients would show up as offline even though they're not. I would imagine that WHSv1 would behave worse with Win10.
    2. On the Dashboard, backups would randomly disappear and show as "Unavailable".
    3. The last time I tried to restore a Windows 10 PC from backup was last year, and it just wouldn't let me (I have forgotten the details, but I had to forego the restore and instead resorted to installing a clean copy of Win10).
    4. WHSv1 and WHS2011 were based on Windows Server 2003 R2 and Windows Server 2008 R2, respectively. Both are EoL/EoS legacy software and are considered inherently insecure. Swiss cheese comes to mind. Sadly, their glory days are over. Then again, YMMV.
    Now, I'm not telling you to stop using it, just laying down the issues you might have to tackle down the road. But if you must use it, consider these suggestions:
    1. Instead of WHSv1, you can try moving up to WHS2011, which has a 25-device limit and is a notch better secured. The only drawback with WHS2011 is that it dropped Drive Extender; you'll need a 3rd-party app for that, like Division M's Drive Bender or StableBit's DrivePool.
    2. Or just use a more modern OS. In fact, Windows 10 already has Storage Spaces. All you need is a centralized backup solution; there are a few free options to choose from, like Veeam or UrBackup. But unlike WHS, there's a bit of a learning curve to brave through.
  33. 1 point
    Build looks OK. I'd be tempted to go for the 304 case over the 202 just to give you more room but still be ITX. Given AMD and NVIDIA are meant to be dropping new GPUs later this year, you could get some decent deals on something used; depends where you are on warranty or waiting etc.
  34. 1 point
    Checking out the specs, your idea will very likely work. The TV has the right outputs to match the input on the Echo Studio. However, one thing I can't answer is how the volume control will work after the integration. Some TVs have fixed-level line audio outputs and rely on the external amplifier or powered speaker (i.e. the Echo Studio) for volume control. In other words, depending on how the TV output is wired, integrating it with the Echo Studio might render the TV remote useless for adjusting the volume. I've read somewhere that you'll need a Fire TV, including its remote, to be able to control the Echo Studio's volume, in addition to adjusting it manually on the Studio or asking Alexa to do it for you. It's a little cumbersome. Apart from the potential issue I've mentioned, I imagine the sound quality (Dolby Atmos) will blow you away. Another option is to just get a purpose-built 2.1 or 5.1 TV soundbar.
  35. 1 point
    CURRENT AS OF JUNE 2016. Tested on HP MicroServer Gen8, BIOS J06 11/02/2015.

    This is a small write-up for newcomers who want to know "it all" about the infamous fan speed behaviour related to the AHCI and B120i RAID settings. Most of it is known and can be found in this forum; one detail is new (maybe). I hope the write-up is correct, but I can't be sure! Thanks for all the info from previous posters.

    The minimum fan speed is different depending on the choice of AHCI (minimum around 14%) or RAID (minimum around 6%). Some say the difference is quite small; others feel the lower speed is preferable. In order to get the lowest fan speed you have to arrange ALL of the following:
    - Use the B120i controller mode
    - Load an operating system including the official HP drivers, especially the AMS (Agentless Management Service)
    - Place at least 1 (one) disk in a RAID0 array
    - Use at least 1 (one) disk in an array which uses the SMART temperature attribute 194. Note: attribute 190 does not work (example: Samsung 830 SSD).

    If all this is arranged, a sensor value appears in iLO: HD max. HD max seems to be the highest value (temperature) of the SMART attribute 194 across all available disks. In this case the fan speeds are managed starting from the lower value of 6%. If anything is missing, fan speed management starts around 14%.

    The exact mechanism is not known. It is reasonably certain the AMS driver returns the HD-max value, because without the AMS driver it doesn't work. The disk temperature data could be returned either to iLO or to the BIOS to manage fan speed. If it goes to iLO, this could be checked by disabling iLO and checking the fan speed; maybe the fan can still be monitored from within the operating system, maybe not. Please note iLO permanently consumes about 5W, so maybe someone is interested in this experiment.

    You can check disk temps during RAID setup: if there is a correct SMART 194 attribute, it shows up in the physical disk info. ONLY during RAID configuration can you check individual disk temperatures; after booting an OS, including the HP drivers, only the HD-max info is available. Please note again that the MicroServer only uses the SMART attribute 194 and not 190, which is also a temperature attribute.
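Since the write-up hinges on the difference between SMART attributes 194 and 190, here is a small Python sketch that parses smartctl-style output to see which temperature attribute a disk actually reports. The sample text and its values are hypothetical; on a real system you would feed in the actual output of smartctl -A /dev/sdX:

```python
# Identify which SMART temperature attribute (194 vs 190) a disk exposes.
# SAMPLE mimics the attribute table printed by `smartctl -A`; the values
# are made up for illustration.

SAMPLE = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE     RAW_VALUE
190 Airflow_Temperature_Cel 0x0032   069   050   000    Old_age  31
194 Temperature_Celsius     0x0022   036   053   000    Old_age  36
"""

def temperature_attrs(smartctl_output: str) -> dict:
    """Map SMART attribute ID -> raw value for the temperature attributes."""
    temps = {}
    for line in smartctl_output.splitlines():
        fields = line.split()
        if fields and fields[0] in ("190", "194"):
            temps[int(fields[0])] = int(fields[-1])
    return temps

t = temperature_attrs(SAMPLE)
print(t)         # {190: 31, 194: 36}
print(194 in t)  # True: this disk's temp can feed the iLO "HD max" value
```

A disk that only reports attribute 190 (like the Samsung 830 SSD mentioned above) would leave 194 out of that dict, and per the write-up the fan management would then fall back to the ~14% floor.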
  36. 1 point
    Mine has the E3-1230v2 and runs 4 x Server 2019 (GUI) domain controllers (2 x separate domains) and a Windows 10 based iTunes server 24x7. In addition to those I can spin up another couple of machines if needed before I start to run out of memory.
  37. 1 point
    https://www.altaro.com/hyper-v/hyper-v-dynamic-memory-explanation-and-recommendations-2/ try that as well
  38. 1 point
    Tikanderoga

    RAM

    For those looking: I have ordered these ones: https://www.pclan.com.au/kingston-16gb-2400mhz-ddr4-ecc-ksm24ed8-16me They work perfectly fine and are also compatible with the shipped 8GB from HP. The server is now running on 24GB of RAM.
  39. 1 point
    The most I've run was 13: a mix of 5x OpenBSD (which can run with as little as 192MB of RAM), 2x Linux (Ubuntu & CentOS), 1x FreeBSD, 4x Windows Server Core 2019 and 1x Win10 Pro. It does get slow, but it's more or less usable for most things. Win10 is just there to run the WSUS MMC, so it's idle a lot, as are the Server Core VMs, which do AD, Windows Admin Center and WSUS. One of the OpenBSD VMs did package-building duties, which makes the rest of the VMs suffer quite a bit. Smaller VMs all got 1 vCPU; only the WSUS, Win10 and OpenBSD builder VMs had 2 or 4, depending on duties. I could run MiniDLNA for streaming, file sharing over NFS, and Zabbix monitoring with it all just fine. The 1265v2 is a real necessity to get enough threads!
  40. 1 point
    If genuine, PC WORLD seem to have a good deal for the month of June. https://www.pcworld.com/article/3562330/exclusive-deal-get-windows-10-pro-for-39-99-thats-80-off-retail.html The price of 39.99 is nice for a retail - not OEM - license.
  41. 1 point
    That's not strictly true. I've built 3; one of them, which was mine, ended up with drive, SDM, RAID card, CPU, heatsink etc. upgrades, but the other 2 are still rocking the stock Celerystick with 6GB and 8GB RAM; they both run Win10 and just run FileZilla and Plex.
  42. 1 point
    That is what your (second, longer) HBA card will give you. I have not had any increase in fan speed / noise, and the iLO4 PCI-1 temp is just 44C (max is 72C) whilst fan speed is 19% using the 'Dynamic Power Savings' setting, same as the ML310e. I've not needed to try the 'OS Control Mode'. Before I upgraded my MicroServer, I too saw the Stutsman (Joe Miner) videos on heatsinks, which were easy to add and effective, and on extra cooling fans, which turned out not to be necessary when adding the HBA card, unless you are also adding the extra 4 disks on the side of the PSU using the Schoondoggy bracket and using a 4i-4e card (instead of the 4i you need), which would draw extra power.
  43. 1 point
    That looks identical to the one I bought. SpeedPAK delivery from China is quicker and has good tracking, as it seems to be air freight. Yes, I can now see SMART data, which I could not while using the B120i for the HDD cage. A side note: I also transitioned from B120i to AHCI and the OS HDD was not recognised by Win 10. I could not find any way round this until, in desperation, I tried my backup Win 10 OS drive from my AHCI Gen8 ML310e and it worked perfectly!! The Win 10 install on the AHCI ML310e was seamless with no drivers needed, so I guess the B120i and its firmware / driver are doing something proprietary / unfriendly.
  44. 1 point
    Well, success! I was able to source a half-height mini PCIe SATA adapter from eBay; the vendor turned out to be less than an hour away in Indy. Anyway, the card installed just fine and booted up in Windows 10. It is recognized by the BIOS without a driver and even allows booting from the adapter. Transfer speed is actually better than what I experienced with the x4 PCI Express card in the only slot on the board: ~80MB/s versus 60MB/s. The only possible snag: it needed a 1.6mm x 10mm bolt, which the card didn't come with. Fortunately, I had one in my shop. You can see how the onboard SATA port limits the size of the card you can install. You can get a 4-port card and even a video capture card, but only as a full-height card. This was $14 delivered off eBay; you can get these from Amazon for $30-40. So if you have a motherboard with an unused mini PCIe slot, you now have some options.
  45. 1 point
  46. 1 point
    I changed the processor and the event was resolved. Thanks everyone.
  47. 1 point
    Got the answer on another forum: "You can't access a drive from two different OSes simultaneously. If you change something on one side, the other side does not recognize the update, because it is not aware of the possibility that someone else uses the same disk. This is why we have cluster-aware filesystems in environments where multiple systems need to access shared data - or they use NFS... You would need to unmount the drive on one side, then mount it again on the other side. Then the change should show up. The way you are doing it now won't work. I would suggest you rethink your approach by using NFS exports, which would allow what you try to achieve. E.g. use the disk in system A, export the path via NFS and mount the NFS export from system B. This can be read-write or read-only, depending on your need."
  48. 1 point
    It's not a server unless it sounds like a wind tunnel
  49. 1 point
    The P600 came today. I've installed it and made all the changes that you mention in this thread. It works with Plex so I'm happy - when you are transcoding in Plex it confirms that it is hardware accelerated. I also found a patch to remove the 2 concurrent encoding limitation. Glad I read this thread now.
  50. 1 point
    The process detailed in the above URL worked like a charm on my HPMSG8. It would have been nice to avoid having to use Windows, but a few minutes booting up my W10P VM to prepare the USB stick didn't hurt too badly... FWIW, -MB