RESET Forums (homeservershow.com)

Leaderboard

Popular Content

Showing content with the highest reputation since 09/02/2019 in all areas

  1. Ok, it's time to solve all the overheating problems. Parts: 1x Noctua NF-A8-5V fan with USB power adaptor cable, 1x Akasa 3D Fan Guard AK-FG08-SL. https://imgur.com/a/7npqgdc The LSI 9271-4i temperature now stays at roughly 57 degrees or below.
    6 points
  2. As I do not plan to build any more of the SDM Rev4 brackets, I have decided to share the schematic. https://1drv.ms/u/s!Ao9ObX1BCpuXsstkeE7Jc5cyRmYTFQ?e=8ouDcP Edit - this is a Visio file. If you download it and open it in Visio you will see the scale of the drawing.
    5 points
  3. iLO 4 v2.78 available. HPE requires users to update to this version immediately. https://support.hpe.com/hpesc/public/km/product/5390291/Product#t=DriversandSoftware&sort=%40hpescuniversaldate descending&layout=table&numberOfResults=25
    4 points
  4. I must say this is a life-saving source of useful information about the Gen8 MicroServer! And speaking as someone resurrecting their Gen8 with a rebuild, it has proven to be extremely useful. My thanks to all who have added and curated this list of links and details. 🙂
    4 points
  5. ***CRITICAL*** iLO 4 update available here. HPE requires users to update to this version immediately. Potential vulnerabilities in the network stack. VULNERABILITY SUMMARY A potential security vulnerability has been identified in Integrated Lights-Out 5 (iLO 5), Integrated Lights-Out 4 (iLO 4), and Integrated Lights-Out 3 (iLO 3) firmware. The vulnerability could be remotely exploited to cause memory corruption. HPE has released updated firmware to mitigate these vulnerabilities. References: CVE-2020-27337
    3 points
  6. What we've learned from the WD Red fiasco is that you should be wary of weasel words in product descriptions and specs: '5400 performance class' versus 'drive speed 5400 rpm'. Here are excerpts from the datasheets of two similar M.2 SSDs, one from Samsung and one from ADATA. What's missing from the ADATA document that is present in the Samsung one? A description of the actual components, i.e. controller type, NAND and DRAM type. ADATA fails to describe what you are getting; it's a marketing brochure, not a spec sheet.
    3 points
  7. 3 points
  8. Hi, posting files other than images is not possible in this forum, so I have posted them on Thingiverse: https://www.thingiverse.com/thing:4578946 Thanks again for sharing.
    3 points
  9. Have you seen the Serve The Home guide? https://www.servethehome.com/hpe-proliant-microserver-gen10-plus-ultimate-customization-guide/ There is a lot of information in there. The server also is not limited to 32GB, in case you weren't aware; I'm running 64GB in mine.
    3 points
  10. Thanks for the reply. It turns out I’d bent a few pins on the CPU socket. God only knows how!! I managed to carefully bend the pins back and I’ve managed to boot up with RAM slots 1 and 2 populated. All seems to be running ok for now. Spot the bent pins in the pic 😬
    3 points
  11. Are you apologetic because you are truly sorry or that you got caught? - Said every Mom on the planet.
    3 points
  12. Hi all, here's a guide I would like to share on Windows Storage Spaces and creating a 4x drive parity pool. In a nutshell, I have Windows Server 2019 and a Storage Spaces parity pool running very nicely on my Gen8. Here's the configuration I used and how to copy my setup. (I still believe ZFS or UnRAID are a far better choice as a filesystem on these limited servers, but if you need Windows like I do, then Storage Spaces can be an excellent alternative.) This is my "best effort" guide and by no means perfect. It does however yield excellent results for both read and write speeds.

Hardware:
Gen8 MicroServer
16GB RAM
CPU - stock for now (1270 V3 on its way)
Disks - 4x 3TB WD NAS drives in the front bays
SSD - Samsung Evo 850 250GB

First lesson: DON'T use the Windows GUI to create the pool or virtual disk, as the GUI applies terrible defaults that you can't edit and will ruin performance. Also make sure you're on the latest version of Windows Server, as a LOT has changed and been improved recently. You must use PowerShell.

Terms:
PhysicalDiskRedundancy - Parity
Columns - 4 (the data segments striped across the disks; should match your 4 disks)
Interleave - 256K (the amount of data written to each "column" or disk; a 256KB interleave gives us a 64K write to each disk)
LogicalSectorSize - 4096
PhysicalSectorSize - 4096
ReFS/NTFS cluster - 64K

Overall configuration: a 4 drive file system, plus one bootable SSD in RAID mode.

BIOS setup (initial):
F9 into the BIOS and set the B120i controller to RAID mode.
F5 into the RAID manager and create 1 individual RAID0 logical drive for the SSD.
Set the SSD as the preferred boot drive (yes, in the same screen).
Set the cluster size to 63.
Enable caching.

Windows install:
Install Windows Server 2019 Standard GUI edition from ISO.
Offer up the B120i RAID drivers via a USB stick so the wizard can see the SSD RAID0 drive. Filename p033111.exe (have them extracted).
Windows update, patch and reboot.

BIOS setup (post Windows):
Once Windows is up and running, go back into the F5 RAID manager and finish setting up the 4 front drives as 4x individual RAID0 logical drives.
Check the SSD is still set as the preferred boot drive (yes, in the same screen).
Set the cluster size to 63.

Windows config of Storage Spaces:
At this point you should see 4 individual drives ready to be used as a storage pool.
Try to set each disk to have a cache (not all drives support this): Win + X to open the side menu, Device Manager, expand Disk Drives, right-click the "HP Logical Volume" for each drive and check "Enable write caching on the device". (If it doesn't work, don't stress; it's optional but nice to have.)

PowerShell - Run as Admin:

Determine the physical disks available for the pool we're about to create:
Get-PhysicalDisk | ft friendlyname, uniqueid, mediatype, size -auto

Your output will look something like this, so identify the 4 drives that are the same and take note of their uniqueid. Mine are the bottom four drives, all 3TB in size.

friendlyname uniqueid size
------------ -------- ----
SSD HP LOGICAL VOLUME 600508B1001C5C7A1716CCDD5A706248 250023444480
HP LOGICAL VOLUME 600508B1001CAC8AFB32EE6C88C5530D 3000559427584
HP LOGICAL VOLUME 600508B1001C51F9E0FF399C742F83A6 3000559427584
HP LOGICAL VOLUME 600508B1001C2FA8F3E8856A2BF094A0 3000559427584
HP LOGICAL VOLUME 600508B1001CDBCE168F371E1E5AAA23 3000559427584

Rename the friendly name based on the uniqueid from above and set the media type to HDD:
Set-PhysicalDisk -UniqueId "Your UniqueID" -NewFriendlyName Disk1 -MediaType HDD

You will need to run that 4 times, once per uniqueid, creating a new friendly name for each drive. I called mine Disk1, Disk2, etc.
Set-PhysicalDisk -UniqueId "600508B1001C2FA8F3E8856A2BF094A0" -NewFriendlyName Disk1 -MediaType HDD
Set-PhysicalDisk -UniqueId "600508B1001CDBCE168F371E1E5AAA23" -NewFriendlyName Disk2 -MediaType HDD
Set-PhysicalDisk -UniqueId "600508B1001CAC8AFB32EE6C88C5530D" -NewFriendlyName Disk3 -MediaType HDD
Set-PhysicalDisk -UniqueId "600508B1001C51F9E0FF399C742F83A6" -NewFriendlyName Disk4 -MediaType HDD

Verify the disks have been set correctly. The following command shows which physical disks are in the primordial pool and CAN be used in the new pool; you're just checking here that the renaming worked and they are all set to the HDD media type. Primordial just means local to your server and available.
Get-StoragePool -IsPrimordial $true | Get-PhysicalDisk | Where-Object CanPool -eq $True

You should see your four drives with the nice names you set, like "Disk1".

Now find your storage subsystem name, as we need it for the next command; just take note of it. Example: "Windows Storage on <servername>". Mine is "Windows Storage on Radaxian".
Get-StorageSubSystem

The following command creates a new storage pool named "Pool1" that uses all available disks and sets the default sector size:
New-StoragePool -FriendlyName Pool1 -StorageSubsystemFriendlyName "Windows Storage on Radaxian" -PhysicalDisks (Get-PhysicalDisk -CanPool $True) -LogicalSectorSizeDefault 64KB

Now create the virtual disk on the new pool with 4 columns and parity set correctly (this is critical to do via PowerShell):
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "VDisk1" -ResiliencySettingName Parity -NumberOfDataCopies 1 -NumberOfColumns 4 -ProvisioningType Fixed -Interleave 256KB -UseMaximumSize

Those two commands should complete without error; if they don't, go back and check your syntax.

Go back into the Windows GUI and open Server Manager\File and Storage Services\Servers. You should see the storage pool listed and the virtual disk we created in the previous steps:
Storage pool - Pool1
Virtual disk - VDisk1

Select Disks in the GUI, identify your new VDisk1 and right-click it. Set it to Online; this will also set it to use a GPT partition table. In the Volumes pane below on the same screen, click TASKS and select "New Volume":
Select ReFS and a sector size of 64K.
Enter a volume name like "Volume1" or whatever you want to call it.
Select a drive letter such as Z.
(You can use NTFS here for slightly better performance, but I'm sticking with ReFS as it has some benefits.)

You'll now have a storage pool, a virtual disk on top, and a volume created with optimal settings.

Go back into PowerShell.

Enable power protected status if applicable (just try it, no harm). Ideally you should have your server connected to a basic UPS to protect it from power outages.
Set-StoragePool -FriendlyName Pool1 -IsPowerProtected $True

Check that the sector sizes of the new virtual disk and all the relevant settings are correct:
Get-VirtualDisk | ft FriendlyName, ResiliencySettingName, NumberOfColumns, Interleave, PhysicalDiskRedundancy, LogicalSectorSize, PhysicalSectorSize

Example output:
FriendlyName ResiliencySettingName NumberOfColumns Interleave PhysicalDiskRedundancy LogicalSectorSize PhysicalSectorSize
VDisk1 Parity 4 262144 1 4096 4096

You're done... enjoy the new volume. At this point you can share out your new volume "Z" and allow client computers to connect.

Some other PowerShell commands I found useful:

Get more verbose disk details around sectors:
Get-VirtualDisk -FriendlyName VDisk1 | fl
Get-PhysicalDisk | select FriendlyName, Manufacturer, Model, PhysicalSectorSize, LogicalSectorSize | ft

Check if TRIM is enabled (this output should be 0):
fsutil behavior query DisableDeleteNotify

If TRIM is not enabled, you can turn it on with these commands:
fsutil behavior set disabledeletenotify ReFS 0
fsutil behavior set disabledeletenotify NTFS 0

Check the power protected status and cache:
Get-StorageAdvancedProperty -PhysicalDisk (Get-PhysicalDisk)[0]

Once your data has been migrated back to your new pool from backup, make sure you run this command to "spread out the data" properly; it rebalances the Spaces allocation for all of the Spaces in the pool:
Optimize-StoragePool -FriendlyName "Pool1"

I'm yet to get my Xeon in the mail, but once that's installed I think the disk performance will go up even higher, as the stock CPU is junk.
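If you would rather stay in PowerShell for the volume-creation step as well (instead of the Server Manager GUI route above), here is a minimal sketch. It assumes the VDisk1 / Volume1 / Z: names used in this guide and the Microsoft-documented Initialize-Disk / New-Partition / Format-Volume pipeline; the GUI route works just as well.

    # Bring the new virtual disk online, initialise it as GPT and format a 64K ReFS volume.
    # Names (VDisk1, Volume1, drive letter Z) match the guide above - adjust to taste.
    Get-VirtualDisk -FriendlyName "VDisk1" | Get-Disk |
        Initialize-Disk -PartitionStyle GPT -PassThru |
        New-Partition -DriveLetter Z -UseMaximumSize |
        Format-Volume -FileSystem ReFS -AllocationUnitSize 65536 -NewFileSystemLabel "Volume1"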
    3 points
  13. Thanks for the guidance on this! Just a note on the solution I settled on in the end, for anyone else who might come across this thread looking to do the same thing. Tried a couple of different cheaper eSATA cards with no success getting the drives in the ICYCube detected in CentOS. Looked into driver issues, again with no luck. The USB connection did work fine, but I wasn't overly happy with that as a solution. I instead got this HBA (10Gtek Host Bus Adapter - https://www.amazon.co.uk/gp/product/B01M9GRAUM) with a Mini SAS connection, and swapped the ICYCube for an alternative external enclosure with a compatible Mini SAS connection (SilverStone SST-TS431S-V2 - https://www.amazon.co.uk/gp/product/B0771S45X3). On setup this worked perfectly, with the drives properly detected. I'm actually also a little happier with the SilverStone unit in general so far; it feels of higher build quality and the fan also seems a little quieter. I also decided to move away from using the B120i RAID controller, and hardware RAID in general, and am now using ZFS / RAIDZ. Thanks again for the help as I was working through this!
    3 points
  14. It's not a server unless it sounds like a wind tunnel
    3 points
  15. The host device has to have an eSATAp port to supply power to the client device; regular eSATA doesn't carry power. I don't have the specs for the N40L, but eSATAp ports are uncommon, even back when the N40 was new. I still have an external USB drive enclosure with an eSATA connector. Works great, but USB 3 devices have largely obsoleted eSATA.
    2 points
  16. Does your Vizio TV have an HDMI port with ARC? ARC is audio return channel. It is what allows audio from the TV to be routed to the soundbar. You would simply use the ARC port from the TV to the Soundbar input (I believe) and then connect your Blu-Ray to another HDMI input on the TV. When you view anything on the TV the sound would be routed thru the ARC port to the soundbar. Your other choice is to use an optical link between the TV and soundbar, with the Blu-Ray still connected to the TV. I am guessing at this point because you didn't say what you have actually tried and what the specific model of your TV is (so I am guessing what ports are available).
    2 points
  17. I've currently got an 18TB disk in mine and it works fine. As JackoUK says, I don't boot from it, just use it for storage.
    2 points
  18. Just speculating, but the keywords 'SchedulerConsole', 'HP', 'diagnostics' and 'telemetrywatch' would point to an application that communicates between your printer and the manufacturer, HP. When you install most printers they ask about participating in a customer experience program and sharing info with the printer vendor: toner levels, printer use, firmware version, software version and diagnostics. This is likely the scheduler trying to communicate with HP.
    2 points
  19. Once you've got the drive plugged into the router, you probably need to set it up further using the admin pages of your router. I am not familiar with SRM, but typically on other platforms you need to access the admin page to add/recognize the drive, format it, and create a network file share. There may be other steps involved, like adding users or setting up basic security for the file share.
    2 points
  20. A while back, I mentioned in a post that QNAP has a switch in its backup program to allow a USB drive to be ejected after a backup. This was a handy way to disconnect a backup, thus protecting it from a ransomware attack (or so I thought). I had wondered if there was a way to "reconnect" the drive, so I wouldn't have to go down to the server and manually turn my backup drives off and on again. I couldn't find a way at the time. Well, a user on the QNAP community forum found a way; I came across this post from last fall - quote:

Logging in to the NAS via ssh and entering the commands:

echo 0 > /sys/bus/usb/devices/2-1/authorized
echo 1 > /sys/bus/usb/devices/2-1/authorized

makes the "ejected" USB drive reappear just as if it had been unplugged and replugged, including the start of an Auto-Backup job that has been defined for that disk, if any. Incidentally, this means that an "ejected" USB backup disk is not safe from crypto trojans. So always disconnect your backup disk physically after running a backup!

unquote

I tested it this evening and it works!! I had to enable SSH logins. I'm not a Linux command line guy at all, but it worked the first time I tried it. I've since disabled SSH logins. QNAP is Linux based, so this may work on other systems. The reason I mention this is the caution he gave at the end: if a command line newb like me can do this, a ransomware jockey could easily "wake up" all the USB drives hooked up and left powered on your NAS. For what it's worth, what I have been doing is putting the backup drive on a smart switch (I use TP-Link Kasa). When I want to back up, I turn on the drive remotely; when the backup is completed and the drive is ejected from the system, I switch it off remotely. With an incremental backup this is typically within a couple of hours. I do a full "offline" backup about once a month, so I haven't gotten around to automating this any further - I guess I could use something like IFTTT.
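Since the reconnect is just two shell commands, it can also be kicked off remotely instead of via a manual SSH session. A minimal sketch from a Windows box with the OpenSSH client, assuming a hypothetical NAS address of 192.168.1.50, an admin account, and that your drive really is at the 2-1 USB bus path quoted above (check yours first):

    # Re-authorize the "ejected" USB backup disk over SSH so it reappears,
    # which also triggers any Auto-Backup job defined for that disk.
    $nas = "admin@192.168.1.50"   # hypothetical address/user - substitute your own
    ssh $nas "echo 0 > /sys/bus/usb/devices/2-1/authorized"
    ssh $nas "echo 1 > /sys/bus/usb/devices/2-1/authorized"

The same caution applies: if a script on your LAN can wake the drive, so can malware, so the smart-switch approach is still the safer habit.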
    2 points
  21. I first boot into the BIOS and verify that all the memory and drives that should be present are recognized. Also verify that the correct boot disk is selected. If this checks out, I boot from a USB stick using a light Linux distro; if it boots, check that your original boot drive is readable - it might be time to replace the hard drive. From a hardware perspective, if I need to open the case, I will reseat every connection and test. The next step would be to try a different power supply. At some point, if it's an arcane hardware failure on a low end system, it's time to replace it imho. Good luck, and if you find the cause let us know.
    2 points
  22. This offline SSA is from 2019: https://support.hpe.com/hpsc/swd/public/detail?swItemId=MTX_40057c5bb50b4197af4afdf478
    2 points
  23. I would check all of your existing WD Reds to see if you have any that are SMR drives. SMR should not be an issue with DrivePool, but you never know. Even though WD owns Hitachi HGST, I find their drives to be consistently good. Toshiba drives are good, specifically their NAS drives. Seagate NAS drives like IronWolf seem to be doing well. Best price per TB is buying WD externals and shucking them, but you can never be sure of what you will get. Depending on the drive model and the system you are putting them in, you may need to modify the 3.3V pin. I needed to add some drives to existing arrays; I picked up some 6TB SAS lightly used for $65 each and some 4TB SAS used for $35 a few months back. At some point I am sure I will get burned on used/refurbed drives, but so far so good.
    2 points
  24. Which i3 CPU are you planning to use? The MicroServer Gen8 requires ECC memory; the modules you have listed will not work. You can run four 3.5" drives in the front bays, and there is an internal SATA connection to run a 2.5" drive internally. There is room for more 2.5" drives internally, but you would need to add more ports through a SATA/SAS HBA or RAID card.
    2 points
  25. 8TB drives in service here ... ... as a BIOS machine the 1st (boot) disk must be <2TB ... ... but declaring other disks with GPT partitioning ... ... the sky is the limit.
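The <2TB figure comes from MBR's 32-bit sector addressing, which a BIOS boot disk is stuck with. A quick back-of-envelope check, assuming 512-byte logical sectors:

    # MBR stores LBA addresses in 32 bits, so the largest addressable BIOS/MBR disk is:
    $maxBytes = [math]::Pow(2, 32) * 512
    "{0:N0} bytes, i.e. {1:N2} TiB" -f $maxBytes, ($maxBytes / 1TB)

GPT uses 64-bit addressing, which is why the non-boot data disks can be as large as you like.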
    2 points
  26. Hellooooo everybody. As the time has finally come and all the required parts have arrived, I am presenting my long-awaited CPU + Noctua cooler upgrade. Everything is reversible with no permanent damage (at least, I hope). The upgrade is a new CPU, a Xeon E3-1270 V2, with a Noctua NH-L9i fan, installed so that the CPU fan follows the main case fan (if the case fan spins fast, the CPU fan does too, and vice versa).

1, CASE:
-------------
First, the case needed to be modified a little. I didn't want to drill out the original passive cooler mounting standoffs, so I did a small workaround. I filed holes for the screw heads in the outside frame (1) and also a longer hole in the ("I don't know why it is there") "mountain" below the CPU (2).
pic 1: https://ibb.co/FBxC04k
pic 2: https://ibb.co/bJdb0gG

2, SCREWS:
------------------
As there was a hole in the screw holder on the motherboard tray, I had to find out the size of the screws: thread, diameter and so on. I asked for help with that in another topic on this forum, and somebody replied that I could use an app to compare the screw against different sizes. I did that and found that the right screw size is #6-32. I had to order them (together with nuts) from China because I couldn't find them in stores in my country.

3, BRACKETS:
----------------------
It is known that the original Noctua brackets are longer, 75 vs 65 mm. I was lucky: I asked a local CNC machining shop if they could make new brackets for a reasonable price. So here we are: 65 mm brackets for the Noctua fan.
pic 3: https://ibb.co/DVBM8Qt (the new ones are not labelled)

4, COMPLETING:
---------------------------
After months of waiting, I had all the necessary parts and installed them together:
https://ibb.co/0D2rN5g - screws in the motherboard tray.
https://ibb.co/w74Z27B - Noctua cooler mounted with screws and nuts.
https://ibb.co/RBNMfRX - with the fan.
https://ibb.co/nC94fs1 - the reason for the hole.

5, FAN - the most tricky part
-------------------------------------
This was the interesting part, as we all know the case fan signal is inverted and it is hard to work around. After some hours of googling I found a solution: there is an IC that can invert the signal the other way. The chip is a 74HC04 and you should be able to get it easily. With the scheme I found, it was really simple to create a new CPU fan cable: https://electronics.stackexchange.com/questions/322390/12v-pwm-fan-seems-to-be-running-inversely-of-what-it-should-be
Here is mine (I also used the guide mentioned in another topic on this forum for powering it from a separate Molex):
https://ibb.co/YfKZWh8 - the first attempt is on the left. I didn't like it much, so I made the new one on the right. A much better result.
https://ibb.co/CQ5Kw0z - the completed setup. I changed the case fan connector to a standard one, and the cable with the chip inside went into shrink tube.

With all this installed, the new CPU fan now works exactly the same way as the main case fan.

The next steps I would like to finish are probably more software-ish. As I still have the original BIOS, firmware and so on, I would like to update them. So maybe the question belongs here: what do you recommend updating first? Is there any guide on how to update everything on this server in the correct way, and does the order matter?
Also, I realized that I am still using the system SSD in the first bay slot, so is there any way to use the SSD independently and keep all 4 bay slots for storage disks?
Another thing: when I ran a "stress" test on the server, all 8 threads at 100%, the CPU went to 80 degrees and kept climbing, but when I checked iLO I saw the fan was spinning at only 18%, and after some time 21%, which I guess is not much. So I need to check the cooling options in the BIOS, I guess. I am also not sure if this is because of the old BIOS. Shouldn't the fan spin faster when the CPU is under heavy load?
I think that's all my thoughts and ideas, and I hope it maybe helps somebody. Many thanks again for all your hints and help. Have a great day. Cheers
    2 points
  27. Just updated to ESXi 7.0 Update 1. No problems whatsoever. Do note that I'm using an E3-1265L V2 CPU.
    2 points
  28. My Synology is running a bunch of stuff:
File services, of course
Surveillance cameras
A podcast feed (I'm not podcasting - it was just for fun to see how to make a podcast stream work)
Backup of my OneDrive cloud data
Workstation backup
and now the Ubiquiti Controller.
It has been working like a charm so far. But you're right, if you already have the CloudKey there's no need to migrate away from it. On the contrary. BTW the DS1019+ is one hell of a machine.
    2 points
  29. All credit to schoondoggy for his original idea. A shame I am on the wrong side of the Atlantic to order an SDM kit, as the import duties make it an expensive option - or would have, if they were still available. I had to look around to see what alternatives I could find... and managed to find the bits needed on Amazon! I must admit I am rather pleased with the end result of this project and may repeat it on the other Gen8 MicroServer. The outcome is a server with a mirrored system drive on the internal B120i, and the main drive bays connected to an HP P222 card providing a RAID-10 data volume. This is pretty much my ideal setup for a stable, resilient but small server.

Sabrent 3.5" to x2 SSD Internal Mounting Kit (BK-HDCC)
Amazon UK - https://amzn.to/32qEVou
Amazon US - https://amzn.to/3h8Blom

The kit comes with all the cabling you need and all the required screws to mount the drives and bracket. Two types of power splitter cables are included along with two SATA cables. The bracket holds two SSDs and provides a small gap between them. I mounted the bracket next to the PSU but left a 2-3mm gap to allow for some airflow past the SSDs in case they get warmed by the P222.

Tips:
Put some insulating tape along the underneath of the upper chassis rail next to the PSU and drill three small holes. I used two strips of tape to stop the drill bit from walking across the metal.
Drill with a 2mm bit first, then a 2.5mm bit. This leaves enough metal for the screws to thread in.
Check and measure each hole against the bracket as you go so that they line up.
The bracket is then fixed in place using the provided screws (see photo).

The final part of the puzzle is another Mini-SAS to SATA cable, the same as the one connected to the internal drive bays. This is an SFF-8087 Mini-SAS [male] connector to 4 SATA [female] header cables. The 50cm cable from Jyopto works great, without too much spare to lose within the system.
Amazon UK - https://amzn.to/2CXAzgb
I could not find it on the Amazon US site, but there was one from CableCreations of the same length.

Just add your choice of SSDs and cable up to the B120i port. I went with Samsung 860 Pro SSDs for the longer rated lifespan of write cycles. I decided to forget about an internal DVD/RW drive and opted for an external HP F2B56AA slimline drive that can be plugged into the USB of either system.
Amazon UK - https://amzn.to/2EgSSxD
Amazon US - https://amzn.to/326uFmL

Sabrent also make a really good quality 2.5" to 3.5" bay converter adapter (BK-PCBS) that I have used in the EX490/X510 to convert from HDD to SSD system drives.
Amazon UK - https://amzn.to/3jbYBmT
Amazon US - https://amzn.to/335kXQz

Hopefully this information will prove useful to some looking to update their MicroServer.
    2 points
  30. The MS Gen8 only supports 8GB UDIMMs.
    2 points
  31. Lycom DT-130 has arrived, wish me luck.
    2 points
  32. Ok, I've just completed some testing and for the benefit of others will share my findings:
E-2246G
64GB RAM
2 x Samsung SATA SSDs (500GB & 1TB)
2 x WD 6TB SATA HDDs
HP TPM
iLO Enablement
HP NC365T (quad-port NIC)
Whilst running 14 VMs, plus installing another VM, running a Robocopy to a NAS, and viewing the video feed from 4 HD cameras hosted on one of the VMs, the highest I have seen the server hit is 122.5W. Under my 'normal' circumstances of 7 VMs running, the highest it hits is 79.7W, and it generally hovers around the 70W mark. With this in mind I'm going to go ahead and order the Lycom DT-130 to install two additional NVMe drives (in lieu of the NC365T).
    2 points
  33. Ah. Had this same issue with WS2012 R2. It is caused by updating the .NET version from the default 4.5 version that installs with the OS. I was able to simply roll back the server to the original version and not update it to later versions like 4.8, etc., and everything is fine. Unfortunately the Dashboard simply isn't compatible with the newer versions of .NET. The result is a crashing Dashboard and hung-up statuses, as you've seen.
    2 points
  34. Where else can you compliment someone on their "nice rack" these days?
    2 points
  35. Yes, for me the RAM was the limiting factor as well. I moved onto a DL360 Gen8. Issue with the MS is even if you can fit a mini-ITX board into the case with another low-profile CPU fan, you'll have to do some heavy modding to the I/O area as it's not removable. I tried to do something similar with the HP Z820 workstation because the case is really quite nice. Unfortunately the amount of modding required was beyond my skill and the effort didn't seem to be worth it.
    2 points
  36. TDP of 69W is very high, even with the 65W replacement cooler from HP that costs a fortune, so I personally wouldn't use that CPU.
    2 points
  37. Sorted it. I created a UEFI boot USB and it worked.
    2 points
  38. Had some time on my hands. I've been running ESXi since I first got the MicroServer, upgrading up to 6.5 Update 2 with all the faff of downgrading drivers. I saw a path to migrate all my guests to Hyper-V with the Microsoft VM migration tool. So, long story short, the HP B120i RAID driver works with the Windows Server 2019 Hyper-V image. I was a bit concerned, as it's only listed for Server 2016. Installed the OpenSSH packages and Windows Admin Center as I don't use Enterprise/Pro Windows at home. Even put on the Windows Subsystem for Linux so from the command line I can be in a more familiar shell; I'm not a PowerShell user for the most part. I shut down all guests and copied them off somewhere safe. The biggest VM disk was 1TB. Used pigz to speed up compression when I tarred them all up; the 1TB was actually mostly allocation, not data, which shrinks massively when compressed. The Linux subsystem was very handy for moving things back and expanding them after I wiped the disks with the Windows install. All images needed connecting to the new virtual router, reverting to DHCP on the NIC, and readdressing where I had statics. Windows Admin Center allowed RDP to the consoles for all of them to do this. I have one PC with Windows Home and Edge on it. Had a mix of FreeBSD, Linux and one Windows image. Amazingly pain free, faster and solid. The only painful part was the time it took to convert all the virtual disks. I kind of wish I had done this sooner, but I think the advent of Windows Admin Center is what made it possible, as I can build a guest over a web interface just like ESXi from anywhere. I don't do any hardware passthrough, so I can't speak to that or to complicated guests. If, like me, you're wondering about dumping ESXi for something else because you used the B120i soft RAID and are too cheap to buy a RAID card to replace it, I think this is the least worst solution. I was going to try Proxmox if I failed miserably, but it worked out fine. Found this guide helpful: https://www.nakivo.com/blog/how-to-convert-vmware-vm-to-hyper-v/
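If you do end up wanting to script guest creation rather than clicking through Windows Admin Center, a minimal Hyper-V PowerShell sketch looks like this (the VM name, VHDX path and "LabSwitch" virtual switch are hypothetical placeholders):

    # Create a Generation 2 guest with a new dynamic VHDX and connect it to an existing switch.
    New-VM -Name "TestGuest" -Generation 2 -MemoryStartupBytes 2GB `
           -NewVHDPath "D:\VMs\TestGuest.vhdx" -NewVHDSizeBytes 60GB -SwitchName "LabSwitch"
    Start-VM -Name "TestGuest"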
    2 points
  39. As long as it doesn't block airflow in the case you will be fine. I've attached SSDs to the inside of the case with double sided tape and lived.
    2 points
  40. It does bring many things into question. I have gone cheap on some things, others not so much. My DVD/BR rips, which make up probably 90-95% of the data usage, I do not want to have to replace, but I could; consequently they are on a RAID 6 array, and the critical data is on RAID 1. As for losing a RAID on rebuild, it's an old fear-mongering article but it gets the point across - the one below explains the math behind what happens: https://www.zdnet.com/article/why-raid-5-stops-working-in-2009/ Obviously RAID 5 did not/does not stop working, but its efficacy as data protection becomes a greater and greater issue as storage devices increase in size.
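To put a number on the math that article is getting at, here's a rough back-of-envelope calculation. It assumes the commonly quoted consumer-drive figure of one unrecoverable read error (URE) per 1e14 bits and a 4x4TB RAID 5 rebuild; the exact URE rate on your drives' datasheet may differ:

    # Rebuilding a 4x4TB RAID 5 means reading the 3 surviving disks in full.
    $bitsRead = 3 * 4e12 * 8                 # roughly 9.6e13 bits
    $ureRate  = 1e14                         # assumed: 1 URE per 1e14 bits read
    # (1 - 1/rate)^bits is approximately exp(-bits/rate)
    $pClean   = [math]::Exp(-$bitsRead / $ureRate)
    "{0:P0} chance the rebuild finishes without hitting a URE" -f $pClean

Under these assumptions the odds of a clean rebuild are well under even, which is the article's point: bigger drives mean more bits read per rebuild, and RAID 6's second parity exists to survive exactly that kind of error mid-rebuild.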
    2 points
  41. Bump this topic :) So I upgraded my MicroServer Gen8 to a Xeon 1270 v2 and decided to install active cooling. I tried to win a Scythe Kozuti on eBay, but oops. Then I read this topic and bought a Noctua NH-L9i. I didn't like the idea of mounting it with cable ties, so I found a guy with a milling machine and he made "short" legs for the L9i. The original Noctua bracket is on top, the milled one below. The cooler fits perfectly on the board using the Noctua screws with additional spacers on the tray side (hello HPE, with your non-standard 1155 mounting dimensions). And don't forget to remove the nuts for the original heatsink from the motherboard tray. The cooler is connected to the case fan connector with a Y-splitter.
    2 points
  42. Yeah, let's be fair - you say "to their credit", but if they hadn't been caught out then we'd all still be in the dark. They didn't do this through choice; someone somewhere at WD made the call not to inform the customer.
    2 points
  43. They didn't say anything about resilvering... which puts any drive that's being added to an already utilized RAID volume into a relentless write task that consequently puts a strain on any SMR drive. It doesn't matter if it's in a datacenter or a home NAS scenario... resilvering works the same way and puts a lot of stress on the drives in the volume. This is classic bait 'n switch, WD. Not good.
    2 points
  44. Red NAS drives, SMR versus CMR. I've investigated this a bit further and came up with the following conclusions. Western Digital is not transparent with any of this information, so this is based just on what I found.
1. The WDx0EFRX drives appear to be the older model. I purchased WD Reds in 2013 and they match the 2013 datasheet; ditto for some 3TB Reds I bought in 2016. As recently as the 2018 datasheet, WD listed WD40EFRX drives in their NAS datasheet. However, that was the first appearance of the WDx0EFAX drives, in the 10TB and 12TB sizes.
2. Their latest datasheet, published in December 2019, lists both WDx0EFRX and WDx0EFAX models for Reds, with interesting differences in cache and speed listed between the two, without explanation.
3. Amazon and others still have WDx0EFRX and WDx0EFAX drives listed separately. I purchased a "spare" WD Red over the weekend; it arrived today and is a WDx0EFRX model.
4. QNAP has a hardware compatibility list. My NAS, a QNAP TS-451, does not list WDx0EFAX as a compatible drive. It does have WDx0EFRX spelled out.
5. On the Synology compatibility list, the WD60EFAX and the WD20EFAX are listed as SMR drives.
The following is not verified, but was mentioned in the QNAP and Synology forums: the WDx0EFAX drives may have been modified through cache to give SMR drives better compatibility with RAID.
Here is a link to the datasheets I've found: https://drive.google.com/drive/folders/1EcjO5Pih7BilAshWhYcxbG6pFTwWWAOj?usp=sharing
    2 points
  45. The spec sheet for the Dell XPS 8700 shows that the SATA ports are SATA 3.0. https://downloads.dell.com/manuals/all-products/esuprt_desktop/esuprt_xps_desktop/xps-8700_reference guide_en-us.pdf The 860 EVO is a very good drive. You may need a 2.5" to 3.5" drive adapter to mount it. https://downloads.dell.com/manuals/all-products/esuprt_desktop/esuprt_xps_desktop/xps-8700_owner's manual_en-us.pdf
    2 points
  46. https://www.woot.com/plus/microsoft-surface-books-surface-pro-4-tablets?ref=w_cnt_gw_dly_wobtn
    2 points
  47. iLO 2.73 released: https://support.hpe.com/hpsc/swd/public/detail?swItemId=MTX_ba3437a6c8d843f39ab5cace06
UPGRADE REQUIREMENTS: OPTIONAL
***ATTENTION*** Note for ESXi users: If you are booted from the Embedded SD Card, it is strongly recommended that you reboot the server immediately after updating the iLO firmware.
FIRMWARE DEPENDENCY: Hewlett Packard Enterprise recommends the following or greater versions of iLO utilities for best performance:
- RESTful Interface Tool (iLOREST) 2.3
- HPQLOCFG v5.2
- Lights-Out XML Scripting Sample bundle 5.10.0
- HPONCFG Windows 5.3.0
- HPONCFG Linux 5.4.0
- LOCFG v5.10.0
- HPLOMIG 5.2.0
KNOWN ISSUES:
- Fibre Channel ports are displayed with degraded status if they are configured but not attached.
FIXES: The following issues are resolved in this version:
- Added fix for Embedded Remote Support in an IPv6-only environment.
- Added fix for Embedded Remote Support data collection for systems with multiple Smart Array Controllers.
ENHANCEMENTS:
- Suppress SNMP traps for NIC link up/link down events that occur during POST.
    2 points
  48. In theory it should work with a SATA M.2 disk, yes, as it uses a SATA controller. What I assume @schoondoggy was referring to is an HBA adapter that allows for actual SATA disks to be added.
    2 points
  49. I had the same problem. The solution that worked for me was to change the Power Regulator setting to "OS Control Mode" in iLO. Hope this helps.
    2 points
  50. For the last 10 years or so, I've been using OneNote to help manage these things. I'll download the manual, info etc. and scan receipts into the folder; I have a separate page for each appliance. Since it's on OneDrive, it's available wherever I go. With regards to extended warranties, I think of them as very limited insurance policies against the loss of the object. In the great majority of cases it's not a good deal; they offer it to you to make money. Beware of confirmation bias: the handful of times it comes in handy are far more memorable than the majority of times it was a waste of money.
    2 points