Windows Server 2016 Essentials


nrf

So this time around I loaded up the 2016 Standard server and turned on the Essentials feature. Client join with skip-domain-join was quick and easy, and the 'Shared Folders' share is accessible from the Launchpad even as a standard user. Hope that helps.

asdaHP

> nrf wrote: So this time around I loaded up the 2016 Standard server and turned on the Essentials feature. Client join with skip-domain-join was quick and easy, and the 'Shared Folders' share is accessible from the Launchpad even as a standard user. Hope that helps.

Hi,

I don't think it's a Standard vs. Essentials issue, since my own server is Standard with the Essentials experience installed.

 

But I think I found the issue. Did you also skip DNS server detection (the HKLM registry value from the TinkerTry write-up, changed from false to true), or only the domain join? I found that the DNS setting is what makes the difference in access for me. I detail that below in my reply to Chris.

Thanks

asdaHP

> Drashna Jaelre wrote: This. Emphatically.
>
> And yeah, Server 2016 is the Windows 10 version of Server. It's going to look and feel much like Windows 10, including no shitty "start menu" thing.
>
> As for the DFS Namespace stuff (the "Shared Folders"), here is what it looks like on 2012R2 Essentials:
>
> [screenshot of the 2012R2 Essentials DFS Namespace settings]

Thanks, Chris, for taking the time to capture and post the settings. On a related note, I may have solved the shared folders issue :) . I believe it has to do with DNS settings.

I had set up a Windows 10 Home machine and connected it without skipping anything, and I could access shared folders; I mentioned that earlier. It struck me yesterday that since Home doesn't connect to the domain, the issue must be something else. So I changed the DNS server discovery registry value to true and rebooted, and voila, I could not access shared folders. I reversed that step and it worked again. Just now I did the same on one of my Pro computers that skipped the domain join: I changed server discovery back to false, rebooted, and shared folders works for both standard and admin users!! So it is just that one step that makes the difference.
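For reference, here is a minimal sketch of reading and toggling that value with Python's standard winreg module. The key path and the exact value name are assumptions based on the TinkerTry tweak discussed in this thread (the Connector's ClientDeployment key); double-check both in regedit on your own client before changing anything, and run it from an elevated prompt.

```python
# Sketch: read and toggle the Essentials Connector's DNS-detection skip value.
# ASSUMPTION: the value lives under the Connector's ClientDeployment key, as
# described in the TinkerTry tweak referenced in this thread. Verify the exact
# path and value name in regedit on your client first.
import winreg

KEY_PATH = r"SOFTWARE\Microsoft\Windows Server\ClientDeployment"  # assumed path
VALUE_NAME = "SkipDNSServerDetection"  # name as used in this thread

def read_skip_dns():
    """Return the current value as a string, or None if it isn't set."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
            value, _type = winreg.QueryValueEx(key, VALUE_NAME)
            return str(value)
    except FileNotFoundError:
        return None

def write_skip_dns(enabled: bool):
    """Set the value to 'true' or 'false' (requires an elevated prompt)."""
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_SZ,
                          "true" if enabled else "false")

if __name__ == "__main__":
    print("Current value:", read_skip_dns())
```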

 

I don't know the technical reason why that happens. I wonder if I can add a second DNS below it, e.g. 8.8.8.8, for the days when my server is off or not working?

 

 

Unrelated question: for the last two days my server has claimed "one or more predefined folders are missing". These include the Users, File Redirection, Photo, Video, and Company folders. I have duplicate folders from my previous installation of the Server 2012 R2 evaluation, which used the same OS drive and the same DrivePool drives. I am wondering:

1. Can I safely delete those older folders from the OS C: drive? The new folders are called Users2 in File Explorer but show up as Users in the Launchpad and shared folders.

2. Is this missing-folders issue, which only started two days ago (Server 2016 has been up for 10 days), happening because I am pointing the shared folders at the DrivePool drive F:? Should I instead point them at an underlying physical drive, e.g. drive D: or E:?

 

Thanks again

[Attachment: server error duplicate or olde folders.jpg]

 

PS: For some reason my JPGs are uploading at a smaller size than I captured them. I'll try reposting in the DrivePool forum later.

[Attachment: server error missing folders.jpg]


Drashna Jaelre

Ah, so it's DNS related.

 

Makes sense.  If you could, open up the "DFS Namespace" management console (it should be in Administrative Tools) and see whether it uses "\\SERVER\Share" or "\\SERVER.DOMAIN.local\Share".

If it's using the second form, then that is the entire issue.
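A quick way to see the symptom from an affected client is to test both UNC forms directly. Here is a minimal sketch; "SERVER", "DOMAIN.local", and the share name are placeholders for your own:

```python
# Sketch: check which UNC form of the share this client can actually reach.
# "SERVER" and "DOMAIN.local" are placeholders; substitute your own names.
import os

candidates = [
    r"\\SERVER\Shared Folders",               # short (NetBIOS) name
    r"\\SERVER.DOMAIN.local\Shared Folders",  # fully qualified name, as DFS uses
]

for path in candidates:
    # os.path.exists() on a UNC path returns False if name resolution
    # or share access fails, which is exactly the failure mode here.
    status = "reachable" if os.path.exists(path) else "NOT reachable"
    print(f"{path}: {status}")
```

If the short name works but the fully qualified one does not, name resolution for the domain suffix is the problem, which is what the DNS discussion below addresses.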

 

 

And as for the DNS, yes, you can specify 2-3 DNS servers....  Though, for simplicity, I would recommend setting this at the router level. Normally, you have DHCP options you can set on the router. Set the primary DNS server to your Essentials server, and set the second DNS server to your ISP's DNS, OpenDNS, or Google's.

 

I HIGHLY recommend this configuration, because then the auto-configuration of DNS never has to happen on the local systems.

This way, clients check the server first; if that fails, they fall back to the other DNS server.

And having the local DNS server also means that lookups get faster after a while, because the Windows DNS server caches all of them.
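To sanity-check the ordering from a client, you can query each DNS server directly and confirm that only the Essentials server answers for the internal name. Here is a sketch using the third-party dnspython package (pip install dnspython); the addresses and the .local name are placeholders:

```python
# Sketch: ask each DNS server directly who can resolve the server's internal name.
# Requires the third-party dnspython package (pip install dnspython).
# The IPs and the .local name are placeholders for your own network.
import dns.resolver

DNS_SERVERS = ["192.168.1.10", "8.8.8.8"]  # Essentials server first, public second
NAME = "SERVER.DOMAIN.local"

for ip in DNS_SERVERS:
    resolver = dns.resolver.Resolver(configure=False)  # ignore the OS config
    resolver.nameservers = [ip]
    resolver.lifetime = 3  # give up after 3 seconds
    try:
        answer = resolver.resolve(NAME, "A")
        print(f"{ip}: {NAME} -> {[r.address for r in answer]}")
    except Exception as exc:
        print(f"{ip}: lookup failed ({type(exc).__name__})")
```

The public resolver should fail on the .local name; only the Essentials server should answer it, which is why the server has to come first in the DNS list that DHCP hands out.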

 

 

 

As for the older folders, I would first check that nothing you need is left in them, but yeah, you should be able to delete/move them without any issues.


asdaHP

> Drashna Jaelre wrote: Ah, so it's DNS related.
>
> Makes sense. If you could, open up the "DFS Namespace" management console (it should be in Administrative Tools) and see whether it uses "\\SERVER\Share" or "\\SERVER.DOMAIN.local\Share".
>
> If it's using the second form, then that is the entire issue.

 

Thank you!! You are absolutely right, it does use \\nameserver.namedomain.local\Shared Folders.

 

So given that, should I try renaming it, or is this something I should not touch? See below, but I would rather not have the server be the primary DNS on my computers (or is that what you are recommending? Sorry, this is very foreign to me).

 

 

 

> Drashna Jaelre wrote: And as for the DNS, yes, you can specify 2-3 DNS servers.... Though, for simplicity, I would recommend setting this at the router level. Normally, you have DHCP options you can set on the router. Set the primary DNS server to your Essentials server, and set the second DNS server to your ISP's DNS, OpenDNS, or Google's.
>
> I HIGHLY recommend this configuration, because then the auto-configuration of DNS never has to happen on the local systems.
>
> This way, clients check the server first; if that fails, they fall back to the other DNS server.
>
> And having the local DNS server also means that lookups get faster after a while, because the Windows DNS server caches all of them.

Can you expand on what you said? I don't understand DNS/DHCP much (the router is running TomatoUSB on an older Asus RT-N16). On logging into the router, I see it has "use internal DNS" checked. At some point I set the DHCP range to 192.168.1.100-150 to avoid conflicts (the router is 192.168.1.1). I have dynamic DNS 1 set to "use WAN IP address xy.zzz.zzz.zzz"; there are dynamic DNS 2 and 3 slots which are currently empty. Am I supposed to change those? In other words, should I change dynamic DNS 1 to the server IP and the others to Google, etc.? Sorry for this bunch of DNS questions; I thought I was feeling great since I had found the problem :-) And lastly, after this, should I switch the client computers' registry back (SkipDNSServerDetection to true)?

Thanks again

Drashna Jaelre

> asdaHP wrote: Thank you!! You are absolutely right, it does use \\nameserver.namedomain.local\Shared Folders.
>
> So given that, should I try renaming it, or is this something I should not touch? See below, but I would rather not have the server be the primary DNS on my computers (or is that what you are recommending? Sorry, this is very foreign to me).

 

 

Ah, so that's exactly the issue: the name resolution is failing outright here, and that's why it's causing the access problem.

 

You can change it, but I'd recommend just setting up the DNS properly and not worrying about it.

Especially as you'd have to redo this every time you added a share (at least for that new share).

 

 

 

> asdaHP wrote: Can you expand on what you said? I don't understand DNS/DHCP much (the router is running TomatoUSB on an older Asus RT-N16). On logging into the router, I see it has "use internal DNS" checked. At some point I set the DHCP range to 192.168.1.100-150 to avoid conflicts (the router is 192.168.1.1). I have dynamic DNS 1 set to "use WAN IP address xy.zzz.zzz.zzz"; there are dynamic DNS 2 and 3 slots which are currently empty. Am I supposed to change those? In other words, should I change dynamic DNS 1 to the server IP and the others to Google, etc.? Sorry for this bunch of DNS questions; I thought I was feeling great since I had found the problem :-)

 

Since I'm not familiar with Tomato's UI...

 

If the LAN IP address section has space for DNS settings, set "DNS1" to your server's IP address (for instance "192.168.1.10"), and then set DNS2 to "8.8.8.8" or whatever.

 

 

 

Ah, here we go: 

http://tomato.wikia.com/wiki/OpenDNS_and_Tomato

 

Both in "static DNS" 

For the first line, enter your server's IP address.

For the second line, enter 8.8.8.8, your ISP's DNS Servers or whatever. 

 

> asdaHP wrote: And lastly, after this, should I switch the client computers' registry back (SkipDNSServerDetection to true)? Thanks again
 
After doing this, it shouldn't matter.
 
I've never messed with this setting, because I do have my network set up "correctly" to handle a domain. The Connector software should see the proper config and just "not do anything".

asdaHP

 

> Drashna Jaelre wrote: Ah, so that's exactly the issue: the name resolution is failing outright here, and that's why it's causing the access problem.
>
> You can change it, but I'd recommend just setting up the DNS properly and not worrying about it. Especially as you'd have to redo this every time you added a share (at least for that new share).
>
> http://tomato.wikia.com/wiki/OpenDNS_and_Tomato
>
> Both entries go under "Static DNS": for the first line, enter your server's IP address; for the second line, enter 8.8.8.8, your ISP's DNS servers, or whatever.

 

 

Thanks!! Will do that. I also made my server IP static first.

 

 
> Drashna Jaelre wrote: After doing this, it shouldn't matter.
>
> I've never messed with this setting, because I do have my network set up "correctly" to handle a domain. The Connector software should see the proper config and just "not do anything".

 

OK. So I'll change the registry back to how the Connector 'usually' connects, i.e. without the TinkerTry hack. I was just afraid that between the router now handing out DNS and what the Connector does it would get messy, but it sounds like that's not the case. I'll report back here once I've made those changes.

 

 

 

On my earlier question about the server recently stating that "one or more predefined folders are missing":

2. Is this missing-folders issue, which only started two days ago (Server 2016 has been up for 10 days), happening because I am pointing the shared folders at the DrivePool drive F:? Should I instead point them at an underlying physical drive, e.g. drive D: or E:?

 

And will 'recreating folders', which is what the Essentials dashboard is telling me to do to fix the missing folders, destroy all the saved data in the corresponding user folders? I'm not sure what recreating actually does.

 

Thanks for the umpteenth time :-)


asdaHP

A couple of follow-ups:

1. My browser pages are loading really slowly now. I'm not sure if this is from the client's DNS pointing to the server IP, or a conflict between the router listing the server IP as static DNS AND the client pointing its DNS at the same server again. Without the skip-DNS trick, the Connector adds the server's IP as the primary DNS (which I can see under the Ethernet properties for IPv4).

 

2. That unrelated issue of missing predefined folders on the server just resolved itself!! Not sure how or why, but I am happy.

nrf

As I am 'engineering' my new setup, some questions arise:

 

  • Avoiding bitrot: currently I use a drive-scanning product that is talked about in these forums. The idea is that by checking the disk from time to time, soft-error sectors can be rewritten, and possibly remapped, before the issue damages your file. In 2016 there is ReFS, but what I read says that in the case of damage the file gets restored using a duplicate copy made possible through a different feature, Storage Spaces. Given the bad press I have seen on the performance of Storage Spaces, and the oddities in different recovery scenarios, isn't the drive scanner the 'simpler is better' solution?
  • Use of SSD vs. rust drives: I have only a 250 GB SSD, and have been putting C: on it. Is that the best use here? I am not doing heavy hosting or anything that I think 'demands' SSD performance.
  • Use of RAID: whether it be hardware or software RAID, to what extent do I need to go for quick recovery from a disk failure? I have never put my C: in RAID, but maybe I should think about it. Likewise, client backup data and version history are also saved twice per day to server backup, so is RAID recommended there? In some regards I treat RAID as a negative: in the event of a crash requiring a rebuild of a RAID volume, today's large disks can take days to rebuild a RAID 1 set.

 

So I would welcome any wisdom from the forum members; I suspect issues like this run through others' minds when they are setting up 2016 instances...

 

Thanks in advance!


Drashna Jaelre

  • Well, Server 2012 introduced ReFS, and was the first OS to do so.  That said, 2012 R2 supports the self-healing feature on parity spaces, not just mirrored ones.

    As for the self-healing feature, IIRC, it only kicks in when corruption is detected. The file system and Storage Spaces then attempt to automatically correct the data, if possible. However, this doesn't always succeed.

     

    But yes, running StableBit Scanner may effectively combat the same problem.  All modern drives include ECC bits on the "spinning rust", so to speak.  That "Hardware ECC Recovered" SMART value? This is what it refers to.  When you read the data, the drive checks the ECC block and can correct the data to some degree.  This is handled entirely invisibly to the OS (and the system in general) and is all done in firmware.

     

    StableBit Scanner's benefit is that it frequently reads the data, so that the drive may detect and correct these errors before they become uncorrectable.  And once they do... the software *can* still attempt to read the data back.

     

    ReFS is basically just another level of integrity checking.   Though there are some other benefits, such as copy-on-write, that significantly lower the chances of data corruption. (IIRC, NTFS just modifies existing data in place when writing to it, but COW writes out a new copy of the file with the modifications and then un-allocates the old one. So if something happens mid-write, you still have the 100% intact old file, whereas NTFS may have actually corrupted it. This is exceedingly rare, though. There's a small sketch of the idea after this list.)

     

    The point where Storage Spaces gets a bad rap is, well, twofold.

    First, when there is a failure, it tends to be catastrophic; there are few tools to help you out, and even less documentation from Microsoft on how to deal with the issue.  There are plenty of disaster stories out there about an entire "pool" failing because of an issue with a single disk, or it not letting you remove a disk, etc.  You're much better off using hardware RAID, IMO.

    The second is performance.  Mirrored is fine, but parity takes a pretty significant performance hit.  Coupled with ReFS, there is an additional (albeit slight) hit on top of that.   It makes a parity Storage Spaces pool pretty much horrible for anything but plain storage.

     

  • For a system drive, yes, SSD is the way to go, especially when dealing with domain controllers.

    The system drive has a lot of I/O occurring on it, meaning that it's always busy. The high IOPS that SSDs can handle make them ideal for this (there's a crude benchmark sketch after this list that shows the sequential-vs-random gap involved).

    Coupled with that is the fact that domain controllers disable write caching on the "system" drive (well, on the drive that the "SYSVOL" and "NETLOGON" folders are stored on... which is the system disk by default).   This makes spinning drives incredibly slow for Essentials... so an SSD will significantly boost system performance.

     

    And I can empirically verify this, as my Essentials server is sitting on spinning rust right now, due to a firmware bug in my old SSD. 

     

  • RAID is redundancy, not backup. It's ideal for uptime, as it can mitigate hardware failure.

    And you're right, rebuild times for large arrays can take days, or even weeks.

     

    Solutions such as ZFS can help mitigate that, as they're a bit better about handling rebuilds, but even so.

     

    That's a large part of what drew me to StableBit DrivePool in the first place. There really isn't a "rebuild": data stays accessible the entire time, and the performance impact is minimal.  That said, the re-duplication process is very much akin to rebuilding... and can take 6+ hours per TB of data, longer for lots of small files (because the duplication process runs at background I/O priority so as not to interfere with system performance, and it's still "just" copying files to new drives; the arithmetic is sketched after this list).

     

    That said, if you need 99.99% uptime from your server (much like I do), having your system disk in a mirrored RAID array isn't a bad idea.  It can boost (read) performance and help ensure uptime.  Even better, modern Intel RST chipsets do actually support passing TRIM commands to the underlying SSDs, so there isn't as much "burn out" of the drives.

     

    The other place that RAID really thrives is performance.  A striped (RAID 0) array gets significantly faster with each additional disk you throw into it.  This can be incredibly important for I/O-intensive tasks, such as video encoding.  Though, arguably, you may be better off with NVMe drives at this point (with 1 GB/s+ speeds).
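To illustrate the copy-on-write point from the first bullet: the difference is between overwriting a file in place (where a crash mid-write can leave it half-old, half-new) and writing a complete new copy and swapping it in atomically. Here is a minimal sketch of the two patterns in Python; note this is an application-level analogy to what ReFS does inside the file system, not ReFS itself:

```python
# Sketch: in-place overwrite vs. a copy-on-write-style atomic replace.
# This is an application-level analogy to the ReFS behavior described above.
import os

def overwrite_in_place(path: str, data: bytes) -> None:
    """NTFS-style in-place modify: a crash mid-write can corrupt the file."""
    with open(path, "r+b") as f:
        f.write(data)  # the old contents are being destroyed as we write

def replace_atomically(path: str, data: bytes) -> None:
    """COW-style update: the old file stays intact until the new one is complete."""
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())  # make sure the new copy is actually on disk
    os.replace(tmp, path)     # atomic swap: a crash leaves old OR new, never half

if __name__ == "__main__":
    with open("demo.txt", "wb") as f:
        f.write(b"original contents")
    replace_atomically("demo.txt", b"updated contents")
    with open("demo.txt", "rb") as f:
        print(f.read())  # b'updated contents'
```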
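And to make the IOPS point from the second bullet concrete, here is a crude benchmark sketch comparing sequential and random 4 KB reads on whatever drive it runs from. The absolute numbers mean little (OS caching flatters them; use a file larger than RAM for a truer picture), but the gap between the two figures on a spinning disk versus an SSD is the point:

```python
# Sketch: crude sequential vs. random 4 KB read comparison.
# OS caching skews the absolute numbers; only the relative gap is interesting.
import os
import random
import time

PATH = "iops_test.bin"
SIZE = 256 * 1024 * 1024   # 256 MB test file
BLOCK = 4096               # 4 KB reads
READS = 2000

with open(PATH, "wb") as f:
    f.write(os.urandom(SIZE))

offsets = random.sample(range(SIZE // BLOCK), READS)

with open(PATH, "rb") as f:
    start = time.perf_counter()
    for i in range(READS):          # sequential reads from the start of the file
        f.seek(i * BLOCK)
        f.read(BLOCK)
    seq = time.perf_counter() - start

    start = time.perf_counter()
    for off in offsets:             # the same number of reads, scattered randomly
        f.seek(off * BLOCK)
        f.read(BLOCK)
    rnd = time.perf_counter() - start

os.remove(PATH)
print(f"sequential: {READS / seq:,.0f} reads/s   random: {READS / rnd:,.0f} reads/s")
```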
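Finally, the "6+ hours per TB" re-duplication figure from the third bullet is easy to sanity-check: it falls out of assuming an effective background-priority copy rate of roughly 45 MB/s (an assumed figure, not a measured one):

```python
# Sketch: back-of-envelope check on "6+ hours per TB" of reduplication.
# The 45 MB/s background copy rate is an assumption for illustration only.
TB_IN_MB = 1_000_000   # decimal MB per TB
RATE_MB_S = 45         # assumed effective copy rate at background I/O priority

hours = TB_IN_MB / RATE_MB_S / 3600
print(f"{hours:.1f} hours per TB at {RATE_MB_S} MB/s")  # -> ~6.2 hours per TB
```

At 45 MB/s that works out to roughly 6.2 hours per TB, which lines up with the estimate above; lots of small files push the effective rate down and the time up.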

