Tuesday, 20 January 2015

10 Core Concepts that Every Windows Network Admin Must Know

The 10 core concepts that every Windows network admin must know. These are the things you need to know not only in your day-to-day job as a Windows network admin, but also if you are interviewing for a network admin position.

 

Introduction

Recently a relative of mine went for a job interview as a security analyst. She was asked a number of technical questions in the interview, but the ones that she struggled with the most were the networking questions (as she had not used or studied networking in some time). I thought it might be helpful, both for Windows network admins out there who need some "brush-up tips" and for those interviewing for network admin jobs, to come up with a list of 10 networking concepts that every network admin should know.
So, here is my list of 10 core networking concepts that every Windows Network Admin (or those interviewing for a job as one) must know:

 

1.     DNS Lookup

The Domain Name System (DNS) is a cornerstone of every network infrastructure. DNS maps names to IP addresses and IP addresses to names (forward and reverse lookups, respectively). Thus, when you go to a web page like www.windowsnetworking.com, without DNS that name would not be resolved to an IP address and you would not see the web page. Thus, if DNS is not working, "nothing is working" for the end users.
DNS server IP addresses are either manually configured or received via DHCP. If you do an IPCONFIG /ALL in Windows, you will see your PC's DNS server IP addresses.

Figure 1: DNS Servers shown in IPCONFIG output
So, you should know what DNS is, how important it is, and that DNS servers must be configured and working for "almost anything" to work.
When you perform a ping, you can easily see that the domain name is resolved to an IP (shown in Figure 2).
Figure 2: DNS name resolved to an IP address
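You can see both directions of the lookup programmatically as well. Here is a quick illustrative sketch in Python (my choice of language for the examples in this article; the localhost example is mine, not part of Windows):

```python
import socket

# Forward lookup: resolve a host name to an IPv4 address, the same step
# Windows performs before ping can send its first packet.
ip = socket.gethostbyname("localhost")
print(ip)  # 127.0.0.1 on most systems

# Reverse lookup: resolve an IP address back to a name. Wrapped in
# try/except because reverse records are not always configured.
try:
    name, _, _ = socket.gethostbyaddr("127.0.0.1")
    print(name)
except socket.herror:
    print("no reverse record configured")
```

In Windows you would see the same resolution happening in the first line of ping output, or by running NSLOOKUP against your configured DNS server.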

2.     Ethernet & ARP

Ethernet is the protocol for your local area network (LAN). You have Ethernet network interface cards (NIC) connected to Ethernet cables, running to Ethernet switches which connect everything together. Without a "link light" on the NIC and the switch, nothing is going to work.
MAC addresses (or Physical Addresses) are unique strings that identify Ethernet devices. ARP (Address Resolution Protocol) is the protocol that maps IP addresses to Ethernet MAC addresses. When you go to open a web page and get a successful DNS lookup, you know the IP address. Your computer will then perform an ARP request on the network to find out which computer (identified by its Ethernet MAC address, shown in Figure 1 as the Physical Address) has that IP address.
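Conceptually, your OS keeps a cache of these IP-to-MAC answers (view it on Windows with ARP -A). A toy sketch of that cache in Python, with entirely made-up addresses:

```python
# A toy ARP cache: the IP-to-MAC mapping your OS builds from ARP replies.
# All addresses below are made-up examples, not real hardware.
arp_table = {
    "10.0.1.1":   "00-1b-63-84-45-e6",
    "10.0.1.107": "00-25-4b-9f-11-02",
}

def resolve_mac(ip):
    """Return the cached MAC for an IP, or None (which would trigger
    a real ARP broadcast on an actual network)."""
    return arp_table.get(ip)

print(resolve_mac("10.0.1.1"))    # cached entry found
print(resolve_mac("10.0.1.99"))   # None: the OS would broadcast an ARP request
```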

 

3.     IP Addressing and Subnetting

Every computer on a network must have a unique Layer 3 address called an IP address. IP addresses are 4 numbers separated by 3 periods like 1.1.1.1.
Most computers receive their IP address, subnet mask, default gateway, and DNS servers from a DHCP server. Of course, to receive that information, your computer must first have network connectivity (a link light on the NIC and switch) and must be configured for DHCP.
You can see my computer's IP address in Figure 1 where it says IPv4 Address 10.0.1.107. You can also see that I received it via DHCP where it says DHCP Enabled YES.
Larger blocks of IP addresses are broken down into smaller blocks of IP addresses, and this is called IP subnetting. I am not going to go into how to do it here, and you do not need to know how to do it from memory either (unless you are sitting for a certification exam), because you can use an IP subnet calculator downloaded from the Internet for free.
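If you are curious what a subnet calculator is doing under the hood, Python's standard ipaddress module performs the same arithmetic. A short sketch (the 10.0.1.0/24 block matches the example addresses in Figure 1):

```python
import ipaddress

# The /24 network my example PC (10.0.1.107) lives on.
net = ipaddress.ip_network("10.0.1.0/24")
print(net.netmask)        # 255.255.255.0
print(net.num_addresses)  # 256 addresses in a /24

# Break the /24 into four smaller /26 subnets, which is exactly
# what "IP subnetting" means.
for subnet in net.subnets(new_prefix=26):
    print(subnet)  # 10.0.1.0/26, 10.0.1.64/26, 10.0.1.128/26, 10.0.1.192/26
```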

 

4.     Default Gateway

The default gateway, shown in Figure 3 as 10.0.1.1, is where your computer goes to talk to another computer that is not on your local LAN. That default gateway is your local router. A default gateway address is not required, but without one you would not be able to talk to computers outside your network (unless you are using a proxy server).


 Figure 3: Network Connection Details
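The decision your PC makes for every outgoing packet is simple: if the destination is on the local subnet, talk to it directly; otherwise, hand the packet to the default gateway. A minimal sketch of that logic, using the example addresses from Figure 3:

```python
import ipaddress

# Example values taken from the figures in this article.
local_net = ipaddress.ip_network("10.0.1.0/24")
default_gateway = "10.0.1.1"

def next_hop(dest_ip):
    """Decide where an outgoing packet is physically sent first."""
    if ipaddress.ip_address(dest_ip) in local_net:
        return dest_ip          # on the LAN: deliver directly (after ARP)
    return default_gateway      # off the LAN: send via the router

print(next_hop("10.0.1.50"))    # local neighbor: delivered directly
print(next_hop("8.8.8.8"))      # Internet host: goes to 10.0.1.1 first
```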

 

5.     NAT and Private IP Addressing

Today, almost every local LAN is using private IP addressing (based on RFC 1918) and then translating those private IPs to public IPs with NAT (Network Address Translation). The private IP addresses always start with 10.x.x.x, 172.16.x.x through 172.31.x.x, or 192.168.x.x (those are the blocks of private IPs defined in RFC 1918).
In Figure 2, you can see that we are using private IP addresses because the IP starts with "10". It is my integrated router/wireless/firewall/switch device that is performing NAT, translating my private IP to the public Internet IP that my ISP assigned to my router.
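Python's ipaddress module knows the RFC 1918 ranges, so you can check any address programmatically, a handy sanity check when troubleshooting NAT:

```python
import ipaddress

# is_private is True for the RFC 1918 blocks (10/8, 172.16/12, 192.168/16).
for ip in ["10.0.1.107", "172.20.5.9", "192.168.1.10", "8.8.8.8"]:
    print(ip, ipaddress.ip_address(ip).is_private)
# The first three print True; 8.8.8.8 is a public address and prints False.
```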

 

6.     Firewalls

Firewalls protect your network from malicious attackers. You have software firewalls on your Windows PC or server, and you have hardware firewalls inside your router or in dedicated appliances. You can think of firewalls as traffic cops that only let in the types of traffic that should be allowed in.
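The "traffic cop" idea boils down to checking each packet against a rule set. A toy sketch (the allow list and packet fields are invented for illustration; real firewalls match on much more, such as source address, protocol, and connection state):

```python
# A toy firewall rule set: permit only traffic to ports on the allow list.
ALLOWED_PORTS = {80, 443, 53}  # example policy: web and DNS only

def permit(packet):
    """Return True if the packet's destination port is allowed."""
    return packet["dst_port"] in ALLOWED_PORTS

print(permit({"src": "203.0.113.5", "dst_port": 443}))  # HTTPS: allowed
print(permit({"src": "203.0.113.5", "dst_port": 23}))   # Telnet: blocked
```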

 

7.     LAN vs WAN

Your local area network (LAN) is usually contained within your building. It may or may not be just one IP subnet. Your LAN is connected by Ethernet switches and you do not need a router for the LAN to function. So, remember, your LAN is "local".
Your wide area network (WAN) is a "big network" that your LAN is attached to. The Internet is a humongous global WAN. However, most large companies have their own private WAN. WANs span multiple cities, states, countries, and continents. WANs are connected by routers.

 

8.     Routers

Routers route traffic between different IP subnets. Routers work at Layer 3 of the OSI model. Typically, routers route traffic from the LAN to the WAN but, in larger enterprises or campus environments, routers route traffic between multiple IP subnets on the same large LAN.
On small home networks, you can have an integrated router that also acts as a firewall, multi-port switch, and wireless access point.

 

9.     Switches

Switches work at layer 2 of the OSI model and connect all the devices on the LAN. Switches switch frames based on the destination MAC address for that frame. Switches come in all sizes from small home integrated router/switch/firewall/wireless devices, all the way to very large Cisco Catalyst 6500 series switches.
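The core of what a switch does is "MAC learning": note which port each source MAC arrives on, then forward frames out the port where the destination MAC was last seen, flooding all ports when the destination is unknown. A toy sketch (MAC values and port numbers are made up):

```python
# A toy learning switch. Real switches do this in hardware at line rate.
mac_table = {}

def handle_frame(src_mac, dst_mac, in_port):
    mac_table[src_mac] = in_port             # learn the sender's port
    return mac_table.get(dst_mac, "flood")   # known port, or flood all ports

print(handle_frame("aa-aa", "bb-bb", in_port=1))  # bb-bb unknown: flood
print(handle_frame("bb-bb", "aa-aa", in_port=2))  # aa-aa was learned on port 1
```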

 

10. OSI Model encapsulation

One of the core networking concepts is the OSI Model. This is a theoretical model that defines how the various networking protocols, which work at different layers of the model, work together to accomplish communication across a network (like the Internet).
Unlike most of the other concepts above, the OSI model isn't something that network admins use every day. The OSI model mainly comes up when seeking certifications like the Cisco CCNA, when taking some of the Microsoft networking certification tests, or when an over-zealous interviewer really wants to quiz you.
To fulfill those wanting to quiz you, here is the OSI model:
  • Application - layer 7 - any application using the network, examples include FTP and your web browser
  • Presentation - layer 6 - how the data sent is presented, examples include JPG graphics, ASCII, and XML
  • Session - layer 5 - for applications that keep track of sessions, examples are applications that use Remote Procedure Calls (RPC) like SQL and Exchange
  • Transport - layer 4 - provides reliable communication over the network to make sure that your data actually "gets there", with TCP being the most common transport layer protocol
  • Network - layer 3 - takes care of addressing on the network, which helps to route the packets, with IP being the most common network layer protocol. Routers function at Layer 3.
  • Data Link - layer 2 - transfers frames over the network using protocols like Ethernet and PPP. Switches function at layer 2.
  • Physical - layer 1 - controls the actual electrical signals sent over the network and includes cables, hubs, and actual network links.
At this point, let me stop downplaying the value of the OSI model because, even though it is theoretical, it is critical that network admins understand and can visualize how every piece of data on the network travels down, and then back up, this model. At every layer, all the data from the layer above is encapsulated by the layer below with that layer's additional header; in reverse, as the data travels back up the layers, it is de-encapsulated.
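The encapsulation described above can be sketched in a few lines. The bracketed "headers" here are just illustrative labels standing in for real protocol headers:

```python
# Each layer wraps the payload from the layer above with its own header.
payload = b"GET / HTTP/1.1"        # application data (layer 7)
segment = b"[TCP]" + payload       # transport header added (layer 4)
packet  = b"[IP]" + segment        # network header added (layer 3)
frame   = b"[ETH]" + packet        # data link header added (layer 2)

print(frame)  # b'[ETH][IP][TCP]GET / HTTP/1.1'

# De-encapsulation on the receiving side strips each header in turn.
assert frame[len(b"[ETH]"):] == packet
assert packet[len(b"[IP]"):] == segment
```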

Friday, 16 January 2015

Understanding Recycle Bin in Active Directory





Accidental deletion of Active Directory objects is a common occurrence for users of Active Directory Domain Services (AD DS) and Active Directory Lightweight Directory Services (AD LDS). In past versions of Windows Server, prior to Windows Server 2008 R2, one could recover accidentally deleted objects in Active Directory, but the solutions had their drawbacks.

In Windows Server 2008, you could use the Windows Server Backup feature and ntdsutil authoritative restore command to mark objects as authoritative to ensure that the restored data was replicated throughout the domain. The drawback to the authoritative restore solution was that it had to be performed in Directory Services Restore Mode (DSRM). During DSRM, the domain controller being restored had to remain offline. Therefore, it was not able to service client requests.

In Windows Server 2003 Active Directory and Windows Server 2008 AD DS, you could recover deleted Active Directory objects through tombstone reanimation. However, reanimated objects' link-valued attributes (for example, group memberships of user accounts) that were physically removed and non-link-valued attributes that were cleared were not recovered. Therefore, administrators could not rely on tombstone reanimation as the ultimate solution to accidental deletion of objects.

Active Directory Recycle Bin, starting in Windows Server 2008 R2, builds on the existing tombstone reanimation infrastructure and enhances your ability to preserve and recover accidentally deleted Active Directory objects.

When you enable Active Directory Recycle Bin, all link-valued and non-link-valued attributes of the deleted Active Directory objects are preserved and the objects are restored in their entirety to the same consistent logical state that they were in immediately before deletion. For example, restored user accounts automatically regain all group memberships and corresponding access rights that they had immediately before deletion, within and across domains. Active Directory Recycle Bin works for both AD DS and AD LDS environments.

What’s new?  

 In Windows Server 2012, the Active Directory Recycle Bin feature has been enhanced with a new graphical user interface for users to manage and restore deleted objects. Users can now visually locate a list of deleted objects and restore them to their original or desired locations.
If you plan to enable Active Directory Recycle Bin in Windows Server 2012, consider the following:
  • By default, Active Directory Recycle Bin is disabled. To enable it, you must first raise the forest functional level of your AD DS or AD LDS environment to Windows Server 2008 R2 or higher. This in turn requires that all domain controllers in the forest or all servers that host instances of AD LDS configuration sets be running Windows Server 2008 R2 or higher.
  • The process of enabling Active Directory Recycle Bin is irreversible. After you enable Active Directory Recycle Bin in your environment, you cannot disable it.
  • To manage the Recycle Bin feature through a user interface, you must install the version of Active Directory Administrative Center in Windows Server 2012.


The Active Directory Recycle Bin feature introduced with Windows Server 2008 R2 provided an architecture permitting complete object recovery. Scenarios that require object recovery by using the Active Directory Recycle Bin are typically high priority, such as recovering from accidental deletions that result in failed logons or work stoppages. But the absence of a rich graphical user interface complicated its usage and slowed recovery.
To address this challenge, Windows Server 2012 AD DS has a user interface for the Active Directory Recycle Bin that provides the following advantages:
  • Simplifies object recovery through the inclusion of a Deleted Objects node in the Active Directory Administrative Center (ADAC)

    • Deleted objects can now be recovered within the graphical user interface
  • Reduces recovery time by providing a discoverable, consistent view of deleted objects
Requirements
  • Recycle Bin requirements must be met:

    • Windows Server 2008 R2 forest functional level
    • Recycle Bin optional-feature must be enabled
  • Windows Server 2012 Active Directory Administrative Center
  • Objects requiring recovery must have been deleted within Deleted Object Lifetime (DOL)
    • By default, DOL is set to 180 days
      
     

    Active Directory Recycle Bin step-by-step


    In the following steps, you will use ADAC to perform the following Active Directory Recycle Bin tasks in Windows Server 2012:

    • Step 1: Raise the forest functional level

    • Step 2: Enable Recycle Bin

    • Step 3: Create test users, group and organizational unit

    • Step 4: Restore deleted objects

    Note:
    Membership in the Enterprise Admins group or equivalent permissions is required to perform the following steps.

    Step 1: Raise the forest functional level


    In this step, you will raise the forest functional level. You must first raise the functional level on the target forest to be Windows Server 2008 R2 at a minimum before you enable Active Directory Recycle Bin.

    To raise the functional level on the target forest :

    1. Right click the Windows PowerShell icon, click Run as Administrator and type dsac.exe to open ADAC.

    2. Click Manage, click Add Navigation Nodes and select the appropriate target domain in the Add Navigation Nodes dialog box and then click OK.

    3. Click the target domain in the left navigation pane and, in the Tasks pane, click Raise the forest functional level. Select a forest functional level of at least Windows Server 2008 R2 and then click OK.

    Windows PowerShell equivalent commands


    The following Windows PowerShell cmdlet or cmdlets perform the same function as the preceding procedure. Enter each cmdlet on a single line, even though they may appear word-wrapped across several lines here because of formatting constraints.
     
    Set-ADForestMode -Identity contoso.com -ForestMode Windows2008R2Forest -Confirm:$false
    

    For the -Identity argument, specify the fully qualified DNS name.



    Step 2: Enable Recycle Bin


     In this step, you will enable the Recycle Bin to restore deleted objects in AD DS.


    To enable Active Directory Recycle Bin in ADAC on the target domain

    1. Right click the Windows PowerShell icon, click Run as Administrator and type dsac.exe to open ADAC.

    2. Click Manage, click Add Navigation Nodes and select the appropriate target domain in the Add Navigation Nodes dialog box and then click OK.

    3. In the Tasks pane, click Enable Recycle Bin, click OK on the warning message box, and then click OK on the refresh ADAC message.

    4. Press F5 to refresh ADAC.


    Windows PowerShell equivalent commands

    The following Windows PowerShell cmdlet or cmdlets perform the same function as the preceding procedure. Enter each cmdlet on a single line, even though they may appear word-wrapped across several lines here because of formatting constraints.
    Enable-ADOptionalFeature -Identity 'CN=Recycle Bin Feature,CN=Optional Features,CN=Directory Service,CN=Windows NT,CN=Services,CN=Configuration,DC=contoso,DC=com' -Scope ForestOrConfigurationSet -Target 'contoso.com'
    

    Step 3: Create test users, group and organizational unit


    In the following procedures, you will create two test users. You will then create a test group and add the test users to the group. In addition, you will create an OU.

    To create test users

    1. Right click the Windows PowerShell icon, click Run as Administrator and type dsac.exe to open ADAC.
    2. Click Manage, click Add Navigation Nodes and select the appropriate target domain in the Add Navigation Nodes dialog box and then click OK.
    3. In the Tasks pane, click New and then click User.

    4. Enter the following information under Account and then click OK:

      • Full name: test1
      • User SamAccountName logon: test1
      • Password: p@ssword1
      • Confirm password: p@ssword1
    5. Repeat the previous steps to create a second user, test2.



      To create a test group and add users to the group

      1. Right click the Windows PowerShell icon, click Run as Administrator and type dsac.exe to open ADAC.
      2. Click Manage, click Add Navigation Nodes and select the appropriate target domain in the Add Navigation Nodes dialog box and then click OK.
      3. In the Tasks pane, click New and then click Group.
      4. Enter the following information under Group and then click OK:
        • Group name: group1

      5. Click group1, and then under the Tasks pane, click Properties.

      6. Click Members, click Add, type test1;test2, and then click OK.


      Windows PowerShell equivalent commands

      The following Windows PowerShell cmdlet or cmdlets perform the same function as the preceding procedure. Enter each cmdlet on a single line, even though they may appear word-wrapped across several lines here because of formatting constraints.
       
      Add-ADGroupMember -Identity group1 -Members test1,test2
      

      To create an organizational unit

      1. Right click the Windows PowerShell icon, click Run as Administrator and type dsac.exe to open ADAC.
      2. Click Manage, click Add Navigation Nodes and select the appropriate target domain in the Add Navigation Nodes dialog box and then click OK.
      3. In the Tasks pane, click New and then click Organizational Unit.
      4. Enter the following information under Organizational Unit and then click OK:
        • Name: OU1


      Windows PowerShell equivalent commands

      The following Windows PowerShell cmdlet or cmdlets perform the same function as the preceding procedure. Enter each cmdlet on a single line, even though they may appear word-wrapped across several lines here because of formatting constraints.
      1..2 | ForEach-Object {New-ADUser -SamAccountName test$_ -Name "test$_" -Path "DC=contoso,DC=com" -AccountPassword (ConvertTo-SecureString -AsPlainText "p@ssword1" -Force) -Enabled $true}
      New-ADGroup -Name "group1" -SamAccountName group1 -GroupCategory Security -GroupScope Global -DisplayName "group1"
      New-ADOrganizationalUnit -Name OU1 -Path "DC=contoso,DC=com"
      
      

      Step 4: Restore deleted objects


      In the following procedures, you will restore deleted objects from the Deleted Objects container to their original location and to a different location.

      To restore deleted objects to their original location

      1. Right click the Windows PowerShell icon, click Run as Administrator and type dsac.exe to open ADAC.

      2. Click Manage, click Add Navigation Nodes and select the appropriate target domain in the Add Navigation Nodes dialog box and then click OK.

      3. Select users test1 and test2, click Delete in the Tasks pane and then click Yes to confirm the deletion.

        Windows PowerShell equivalent commands

        The following Windows PowerShell cmdlet or cmdlets perform the same function as the preceding procedure. Enter each cmdlet on a single line, even though they may appear word-wrapped across several lines here because of formatting constraints.
         
        Get-ADUser -Filter 'Name -like "*test*"' | Remove-ADUser -Confirm:$false
         
      4. Navigate to the Deleted Objects container, select test2 and test1 and then click Restore in the Tasks pane.

      5. To confirm the objects were restored to their original location, navigate to the target domain and verify the user accounts are listed.

        Note:
        If you navigate to the Properties of the user accounts test1 and test2 and then click Member Of, you will see that their group membership was also restored.

      Windows PowerShell equivalent commands

      The following Windows PowerShell cmdlet or cmdlets perform the same function as the preceding procedure. Enter each cmdlet on a single line, even though they may appear word-wrapped across several lines here because of formatting constraints.
       
      Get-ADObject -Filter 'Name -like "*test*"' -IncludeDeletedObjects | Restore-ADObject
      


      To restore deleted objects to a different location

      1. Right click the Windows PowerShell icon, click Run as Administrator and type dsac.exe to open ADAC.

      2. Click Manage, click Add Navigation Nodes and select the appropriate target domain in the Add Navigation Nodes dialog box and then click OK.

      3. Select users test1 and test2, click Delete in the Tasks pane and then click Yes to confirm the deletion.

      4. Navigate to the Deleted Objects container, select test2 and test1 and then click Restore To in the Tasks pane.

      5. Select OU1 and then click OK.

      6. To confirm the objects were restored to OU1, navigate to the target domain, double click OU1 and verify the user accounts are listed.

      Windows PowerShell equivalent commands
      The following Windows PowerShell cmdlet or cmdlets perform the same function as the preceding procedure. Enter each cmdlet on a single line, even though they may appear word-wrapped across several lines here because of formatting constraints.

      Get-ADObject -Filter 'Name -like "*test*"' -IncludeDeletedObjects | Restore-ADObject -TargetPath "OU=OU1,DC=contoso,DC=com"
      



Windows Server 2012 Interview Questions

1) What is DNS & DNS Records?
A) Domain Name System (DNS): it is used for resolving host names to IP addresses and IP addresses to host names. Common DNS record types include A, CNAME, NS, MX, and PTR. A records: Address (A) records map a host name to its IPv4 address.

2) What is Replication ? 
A) Replication means duplicating data. For example, a DC and an ADC (additional domain controller) share the same database; whatever changes are made on the DC will automatically be replicated to the ADC, and vice versa.

3) What are the Logical / Physical Structures of the AD Environment?
A) Active Directory consists of a series of components that constitute both its logical structure and its physical structure.
Logical : Object, OU, Domain, Tree, Forest.
Physical : DC & Site.

4) What is the default ports number in Outlook for POP3 / HTTP / SMTP?
A) POP3: 110, SMTP: 25, HTTP: 80

5) What are some of the command-line tools available for managing a Windows 2003 Server/Active Directory environment?
A) NLTEST, NETDIAG, DCDIAG, GPUPDATE /FORCE, REPLMON, REPADMIN, PING, NSLOOKUP, DSQUERY, CLUADMIN, etc.

6) What types of Hard Disks are used in Servers ?
A) Mostly SCSI hard drives are used in servers, for a few reasons: high scalability and flexibility in RAID arrays, faster performance than other HDD interfaces such as SATA, ATA, and IDE, and good reliability plus compatibility with older SCSI devices.

7)What is Active Directory GARBAGE Collection ?
A) Garbage collection is the process of online defragmentation of the Active Directory database. It happens every 12 hours.

8) Explain Patch, Service Pack and Hot Fix
A) Patch: a Microsoft patch contains updates for an application that improve performance and fix bugs.
Hot fix: similar to a patch, but a hotfix addresses a specific issue and can also come with new features and bug fixes.
Service Pack: the latest service pack contains a cumulative roll-up of hotfixes, patches, and the latest updates.
 


Thursday, 15 January 2015

Configuring BitLocker in Windows Server 2012 Environment

Introduction: In this section, let's see how BitLocker is configured in Windows Server 2012 and Windows 8.


When a laptop is stolen or lost, unauthorized users may gain access to personal and corporate data. Even when a computer is password protected, data can be easily retrieved by removing the hard drive and mounting it on a different computer.
BitLocker is an encryption platform developed by Microsoft that mitigates these types of issues. The feature was introduced in Windows Vista and allows users to encrypt data natively. BitLocker has improved over the years, and I will go over these features in this article. BitLocker can also be used with hardware encryption devices such as a TPM (Trusted Platform Module). Most enterprise-level laptops, such as the Lenovo T410, are built with a TPM device. Combining these two technologies provides the best offline protection against data theft. Consumer-class computers that do not have a TPM device can utilize a USB drive to work with BitLocker encryption.
BitLocker Requirements
  • TPM version 1.2 or later for the TPM system integrity check
  • If a TPM is not available, a USB drive for the startup key
  • NTFS file system
  • The system partition (350 MB) and the OS partition must be separate
Below are the new BitLocker features introduced in Windows Server 2012 and Windows 8:

Shared Storage Support
 
BitLocker now allows encryption of Windows Failover Cluster shared volumes.

BitLocker Preprovision
 
Allows system administrators to deploy Windows Server 2012 in an encrypted state during installation.

Faster Encryption Time
 
Windows Server 2012 introduces the used disk space encryption feature, which encrypts only the disk space that is in use. This leads to a much faster end-user experience.

PIN or Password change for users
 
This enables regular users to change the PIN or password on BitLocker volumes.

Hardware Encrypted Drives Support
 
Windows Server 2012 now supports hard drives that are encrypted at the hardware level.

How BitLocker Works

TPM, also known as the Trusted Platform Module, is a hardware chip that is usually attached to the mainboard. This device allows management of encryption keys. The TPM's purpose is to store encryption keys in such a way that they can only be used by the device that encrypted the data. This double-protection architecture provides high security without complex management. The TPM must be in the "Owned and turned on" state for BitLocker to encrypt a drive. The TPM Management console on Windows Server 2012/Windows 8 allows users to initialize the TPM and change its state. Additionally, you can change the owner password and reset the TPM lockout.

Configuring TPM

You can use the "Prepare the TPM" option under "Actions" to initialize the TPM module. Once it is initialized, configure the TPM ownership password and store the .tpm file in a secure location. Note that the TPM ownership password can also be stored in AD, which uses the "ms-TPM-OwnerInformationForComputer" property.




Disabling TPM

To disable the TPM, simply use the "Turn TPM Off" option under the Actions pane in the TPM Management console. You need the owner password or the owner password file.
To Install BitLocker in Windows Server 2012
  1. Install “BitLocker Drive Encryption” and “Enhanced Storage”
    note you can also use below PowerShell command
    Install-WindowsFeature BitLocker -IncludeAllSubFeature
  2. Now you can turn on BitLocker by going to Control Panel > BitLocker Drive Encryption
  3. Or you can right click on any hard drive and choose “Manage BitLocker”

Features Removed from Windows Server 2012

Cluster.exe
Good old Cluster.exe is replaced by the Failover Clustering PowerShell cmdlets. Cluster.exe is not installed by default, but it is still available as an optional component. 32-bit resource DLLs are also no longer supported.

XDDM
Support for XDDM display drivers has been removed in Windows Server 2012. You may still use the WDDM basic display-only driver that is included in this OS.

Hyper-V TCP Offload
The TCP offload feature for Hyper-V VMs has been removed. A guest OS will not be able to use TCP Chimney.

Token Rings
Token ring network support is removed in Windows Server 2012, who needs it anyway?

SMB.sys
This file has been removed; the OS now uses WSK (Winsock Kernel) to provide the same service.

NDIS 5.x
The NDIS 5.0, 5.1, and 5.2 APIs are removed. NDIS 6 is supported.

VM Import/Export
In Hyper-V, the import/export method of transporting VMs is replaced by the Register/Unregister method.

SMTP
SMTP and the associated management tools are deprecated. You should begin using System.Net.Smtp. With this API, you will not be able to insert a message into a file for pickup; instead, configure web apps to connect on port 25 to another server using SMTP.


Securing Application Execution with Microsoft AppLocker

Introduction

AppLocker is a new feature available in Windows 7 and Windows Server 2008 R2 that helps to prevent the use of unknown or unwanted applications within a network. Its functionality boasts both security and compliance benefits for a wide array of organizational environments.





AppLocker is a feature in Windows Server 2012, Windows Server 2008 R2, Windows 8, and Windows 7 that advances the functionality of the Software Restriction Policies feature. AppLocker contains new capabilities and extensions that reduce administrative overhead and help administrators control how users can access and use files, such as executable files, scripts, Windows Installer files, and DLLs. By using AppLocker, you can:

  • Define rules based on file attributes that persist across application updates, such as the publisher name (derived from the digital signature), product name, file name, and file version. You can also create rules based on the file path and hash.
  • Assign a rule to a security group or an individual user.
  • Create exceptions to rules. For example, you can create a rule that allows all users to run all Windows binaries except the Registry Editor (Regedit.exe).
  • Use audit-only mode to deploy the policy and understand its impact before enforcing it.
  • Create rules on a staging server, test them, export them to your production environment, and then import them into a Group Policy Object.
  • Simplify creating and managing AppLocker rules by using Windows PowerShell cmdlets for AppLocker.


Software Restriction Policies was originally designed for Windows XP and Windows Server 2003 to help IT professionals limit the number of applications that would require administrator access. With the introduction of User Account Control (UAC) and the emphasis of standard user accounts in Windows Vista®, fewer applications require administrator privileges. As a result, AppLocker was introduced to expand the original goals of Software Restriction Policies (SRP) by allowing IT administrators to create a comprehensive list of applications that should be allowed to run.



Typically, an app consists of multiple components: the installer that is used to install the app and one or more .exe files, .dll files, or scripts. With classic apps, not all of these components always share common attributes such as the publisher name, product name, and product version. Therefore, AppLocker controls each of these components separately through the following rule collections: Exe, Dll, Script, and Windows Installer. In contrast, all the components of a packaged app share the same Publisher name, Package name, and Package version attributes. It is therefore possible to control an entire app with a single rule.




Rule conditions are properties of files that AppLocker uses to enforce rules. Each AppLocker rule can use one primary rule condition. The following three rule conditions are available in AppLocker:

  • Publisher rule conditions can only be used for files that are digitally signed by a software publisher. This condition type uses the digital certificate (publisher name and product name) and properties of the file (file name and file version). This type of rule can be created for an entire product suite, which allows the rule in most cases to still be applicable when the application is updated.
  • Path rule conditions are based on the file or folder installation path of specific applications.
  • File hash rule conditions are based on the unique file hash that Windows cryptographically computes for each file. Because the hash is unique to a specific version of a file, each time a publisher updates the file you must create a new rule.
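As a sketch of how these conditions are used in practice, the AppLocker PowerShell cmdlets can generate publisher rules (with hash rules as a fallback for unsigned files) from the contents of a folder. The folder and output paths below are illustrative:
<code>
# Sketch (illustrative paths): scan an install folder and build publisher
# rules, falling back to hash rules for files that are not digitally signed.
Get-AppLockerFileInformation -Directory 'C:\Program Files\ContosoApp' -Recurse |
    New-AppLockerPolicy -RuleType Publisher, Hash -User Everyone -Xml |
    Out-File 'C:\Policies\ContosoApp.xml'
</code>
The resulting XML can then be reviewed and imported into a Group Policy Object, which matches the staging-then-production workflow described earlier.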



In Windows Server 2008 R2 and Windows 7
AppLocker rules can be enforced on computers running Windows 7 Ultimate, Windows 7 Enterprise, or any edition of Windows Server 2008 R2 except Windows Web Server 2008 R2 and Windows Server 2008 R2 Foundation.
To create rules for a local computer, the computer must be running Windows 7 Ultimate or Windows 7 Enterprise. If you want to create rules for a Group Policy Object (GPO), you can use a computer that is running any edition of Windows 7 if the Remote Server Administration Tools are installed. AppLocker rules can be created on any edition of Windows Server 2008 R2. Although you can create AppLocker rules on computers running Windows 7 Professional, they will not be enforced on those computers. However, you can create the rules on a computer running Windows 7 Professional and then export the policy for implementation on a computer running an edition of Windows that does support AppLocker rule enforcement.
In Windows Server 2012 and Windows 8
AppLocker is supported on Windows 8 Enterprise and on all editions of Windows Server 2012, except under the Server Core installation option.





The most common reasons why the AppLocker rules might not be enforced are:

  • The Application Identity service (AppIDsvc) is not running.
  • Rule enforcement is set to Audit only.
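Both conditions can be checked quickly from an elevated PowerShell prompt. This is a sketch, assuming an edition that supports the AppLocker cmdlets:
<code>
# Verify the Application Identity service is running
Get-Service -Name AppIDSvc

# Inspect the effective policy; the EnforcementMode attribute of each
# rule collection shows whether it is Enabled or AuditOnly
Get-AppLockerPolicy -Effective -Xml
</code>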



To view AppLocker events, you can use event forwarding technologies, Event Viewer (eventvwr.msc), or the Get-WinEvent Windows PowerShell cmdlet.
You can either use Remote Desktop to log on to a client computer or physically log on to that computer to view or collect the AppLocker events.
In Event Viewer, AppLocker events are stored in a log under: Applications and Services Logs\Microsoft\Windows\AppLocker. There are two child logs: one for executable files and DLLs and another for Windows Installer files and scripts.
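For example, the Get-WinEvent cmdlet can read the executable-and-DLL child log directly. A sketch, run on the computer that holds the events:
<code>
# Show the 20 most recent AppLocker events for executables and DLLs
Get-WinEvent -LogName 'Microsoft-Windows-AppLocker/EXE and DLL' -MaxEvents 20 |
    Select-Object TimeCreated, Id, Message
</code>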



When AppLocker is enabled, only applications that are specified will be allowed to run. When you first create rules, AppLocker will prompt you to create the default rules. These default rules ensure that key Windows system files and all files in the Program Files directory will be permitted to run. While the default rules are not mandatory, we recommend that you start with the default rules as a baseline and then edit them or create your own to ensure that Windows will function properly.
If computers cannot start properly due to your AppLocker policy, edit the AppLocker rules in the corresponding GPO to be less restrictive. If the AppLocker rules are defined in a computer's local policy, start the computer in Safe Mode, create the default AppLocker rules, and then restart the computer.


AppLocker provides Windows PowerShell cmdlets designed to streamline the administration of an AppLocker policy. They can be used to help create, test, maintain, and troubleshoot an AppLocker policy. The cmdlets are intended to be used in conjunction with the AppLocker user interface that is accessed through the Microsoft Management Console (MMC) snap-in extension to the Local Security Policy snap-in and Group Policy Management Console.
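For instance, the Test-AppLockerPolicy cmdlet lets you verify whether a given file would be allowed to run under an exported policy before you enforce it. The policy path here is illustrative:
<code>
# Would calc.exe be allowed to run for Everyone under this policy?
Test-AppLockerPolicy -XmlPolicy 'C:\Policies\ContosoApp.xml' `
    -Path 'C:\Windows\System32\calc.exe' -User Everyone
</code>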


No. A virtual machine is a separate image. Therefore, the image cannot access the policy files of the computer that hosts the AppLocker policies.


Some tasks can be done by using Remote Desktop. AppLocker policies can be applied by using domain GPOs, local GPOs, or both. If a user requests access to an application, you can use one of the Windows PowerShell cmdlets, or you can use the Local Security Policy snap-in (secpol.msc) to add a local rule to temporarily allow that application. In both cases, you need to have administrator privileges. You can either do this by using Remote Desktop or by using the Windows PowerShell remote access capabilities.





There are three ways of doing this:
  • Back up the GPOs by using the GPMC (for domain policies only).
  • Export the AppLocker policy by using the AppLocker snap-in.
  • Create a script by using the Get-AppLockerPolicy Windows PowerShell cmdlet to export the policy.
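The PowerShell option can be as short as a single pipeline; the output path below is illustrative:
<code>
# Export the effective AppLocker policy of the local computer as XML
Get-AppLockerPolicy -Effective -Xml | Out-File 'C:\Backup\AppLockerPolicy.xml'
</code>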






Tuesday, 13 January 2015

Remote Management with PowerShell

Introduction

Let's examine the remoting features in PowerShell 4.0 and explore the protocols, services, and configurations needed for remoting to function. There will be demonstrations to highlight how remoting works by getting information, creating objects, changing settings, and assigning user permissions to a group of computers remotely.

Windows PowerShell Remoting

Windows PowerShell remoting provides a method to transmit any command to a remote computer for local execution. The commands do not have to be available on the computer that originates the connection; it is enough if just the remote computers are able to execute the commands.
Windows PowerShell remoting relies on the Web Services for Management (WS-Man) protocol. WS-Management is a Distributed Management Task Force (DMTF) open standard that runs over HTTP (or HTTPS). The Windows Remote Management (WinRM) service is the Microsoft implementation of WS-Management. WinRM is at the heart of Windows PowerShell remoting, but this service can also be used by other, non-PowerShell applications.
By default, WS-Man and PowerShell remoting use ports 5985 and 5986 for connections over HTTP and HTTPS, respectively. This is much friendlier to network firewalls when compared to legacy communication protocols such as the Distributed Component Object Model (DCOM) and Remote Procedure Call (RPC), which use numerous ports and dynamic port mappings.
Remoting is enabled by default on Windows Server 2012, where it is required by the Server Manager console to communicate with other Windows servers, and even to connect to the local computer where the console is running. On client operating systems, such as Windows 7 or Windows 8, remoting is not enabled by default.
Once enabled, remoting registers at least one listener. Each listener accepts incoming traffic through either HTTP or HTTPS; listeners can be bound to one or multiple IP addresses. Incoming traffic specifies the intended destination or endpoint. These endpoints are also known as session configurations.
When traffic is directed to an endpoint, WinRM starts the PowerShell engine, hands off the incoming traffic, and waits for PowerShell to complete its task. PowerShell will then pass the results to WinRM, and WinRM handles the transmission of that data back to the computer that originated the commands.
While this article concentrates on the remoting feature of Windows PowerShell, it is worth noting that there are other remote connectivity protocols that are also used by specific PowerShell cmdlets. For instance, some cmdlets use the RPC protocol, others depend on the remote registry service. These numerous communication protocols demand additional configuration on the firewall to allow those PowerShell commands to be executed across the network.

 

Enabling PowerShell Remoting on a Local Computer

You may need to enable remoting on Windows clients, older Windows Server operating systems, or Windows Server 2012 if it has been disabled. However, keep in mind that remoting must be enabled only on computers that you will connect to; no configuration is needed on the computer from which you are sending the commands.
To manually enable remoting, run the Enable-PSRemoting cmdlet as shown below:
 Figure 1
Running the Enable-PSRemoting cmdlet makes the following changes to the computer:
  • Sets the WinRM service to start automatically and restarts it.
  • Registers the default endpoints (session configurations) for use by Windows PowerShell.
  • Creates an HTTP listener on port 5985 for all local IP addresses.
  • Creates an exception in the Windows Firewall for incoming TCP traffic on port 5985.
If one or more network adapters in a computer are set to Public (as opposed to Work or Domain), you must use the –SkipNetworkProfileCheck parameter for the Enable-PSRemoting cmdlet to succeed.
Running Get-PSSessionConfiguration exposes the endpoints created by Enable-PSRemoting.

Figure 2

Enabling PowerShell Remoting Using Group Policy

If you have a large number of computers, configuring a Group Policy Object (GPO) may be a better option to enable remoting than manually executing the Enable-PSRemoting cmdlet on each system.
The order is not important, but the following three steps must be completed for the GPO to trickle down effectively and enable remoting on your domain computers:
  • Create a Windows firewall exception for the WinRM service on TCP port 5985
  • Allow the WinRM service to automatically listen for HTTP requests
  • Set the WinRM Service to start automatically

Create a Windows Firewall exception for the WinRM service on TCP port 5985


To create the firewall exception, use the Group Policy Management Console and navigate to Computer Configuration\Administrative Templates\Network\Network Connections \Windows Firewall\Domain Profile.  


Figure 3
Right-click the Windows Firewall: Define inbound program exceptions setting and select Edit.

Click Show, and in the Show Contents dialog box, under Value, enter the following line: 5985:TCP:*:Enabled:WinRM, as seen below:


Allow the WinRM service to automatically listen for HTTP requests

Again using the Group Policy Management Console, that setting can be located under Computer Configuration\Administrative Templates\Windows Components\Windows Remote Management (WinRM)\WinRM Service.

Right-click Allow remote server management through WinRM and select Edit. Click Enabled and specify the IPv4 and IPv6 filters, which define the IP addresses on which listeners will be configured. You can enter the * wildcard to indicate all IP addresses.

 Set the WinRM Service to start automatically

This setting can be found under Computer Configuration\Windows Settings\Security Settings\System Services\Windows Remote Management (WS-Management).

Right-click Windows Remote Management (WS-Management), select Properties and set the startup mode to “Automatic.”

Once all the preceding GPO settings are completed and the group policy is applied, your domain computers within the policy scope will be ready to accept incoming PowerShell remoting connections.

Using Remoting

There are two common options for approaching remoting with PowerShell. The first is known as one-to-one remoting, in which you make a single remote connection and a prompt is displayed on the screen where you can enter the commands that are executed on the remote computer. On the surface, this connection looks like an SSH or telnet session, even though it is a very different technology under the hood. The second option is called one-to-many remoting, and it is especially suited for situations when you want to run the same commands or scripts in parallel on several remote computers.

 

One-to-One Remoting (1:1)

The Enter-PSSession cmdlet is used to start a one-to-one remoting session. After you execute the command, the Windows PowerShell prompt changes to indicate the name of the computer that you are connected to. See figure below.


 

During this one-to-one session, the commands you enter on the session prompt are transported to the remote computer for execution. The commands’ output is serialized into XML format and transmitted back to your computer, which then deserializes the XML data into objects and carries them into the Windows PowerShell pipeline. At the session prompt, you are not limited to just entering commands, you can run scripts, import PowerShell modules, or add PSSnapins that are registered to the remote computer.
There are some caveats to this remoting feature that you need to be aware of. By default, WinRM only allows remote connections to the actual computer name; IP addresses or DNS aliases will fail. PowerShell does not load profile scripts on the remote computer, and to run other PowerShell scripts, the execution policy on the remote computer must be set to allow it. If you use the Enter-PSSession cmdlet in a script, the script would run on the local machine to make the connection, but none of the script commands would be executed remotely because they were not entered interactively at the session prompt.
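A minimal one-to-one session, using the FS1 server from the examples in this article, looks like this:
<code>
Enter-PSSession -ComputerName FS1   # prompt changes to [FS1]: PS C:\...>
Get-Service -Name WinRM             # runs on FS1, not on the local machine
Exit-PSSession                      # return to the local prompt
</code>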

One-to-Many Remoting

With one-to-many remoting, you can send a single command or script to multiple computers at the same time. The commands are transported and executed on the remote computers, and each computer serializes the results into XML format before sending them back to your computer. Your computer deserializes the XML output into objects and moves them to the pipeline in the current PowerShell session.
The Invoke-Command cmdlet is used to execute one-to-many remoting connections. The -ComputerName parameter of the Invoke-Command accepts an array of names (strings); it can also receive the names from a file or get them from another source. For instance:
A comma-separated list of computers:
-ComputerName FS1,CoreG2,Server1
Reads names from a text file named servers.txt:
-ComputerName (Get-Content C:\Servers.txt)
Reads a CSV file named Comp.csv that has a computer column with computer names.
-ComputerName (Import-CSV C:\Comp.csv | Select –Expand Computer)
Queries Active Directory for computer objects
-ComputerName (Get-ADComputer –filter * | Select –Expand Name)
Here is an example of using remoting to obtain the MAC addresses of a group of computers:
<code>
 Invoke-Command -ComputerName FS1,CoreG2,Server1 -ScriptBlock `
{Get-NetAdapter | Select-Object -Property SystemName,Name,MacAddress |
Format-Table}
</code>
Here is the output:


Here is another example: Let’s say that you need to create a folder on each computer to store drivers and, at the same time, you want to assign full control permission to a domain user, named User1, to access the folder. Here is one way you could code the solution:
<code>
Invoke-Command -ComputerName Fs1,CoreG2,Server1,Win81A `
-ScriptBlock {New-Item -ItemType Directory -Path c:\Drivers
$acl = Get-Acl c:\Drivers
$User1P = "lanztek\User1","FullControl","Allow"
$User1A = New-Object System.Security.AccessControl.FileSystemAccessRule $User1P
$acl.SetAccessRule($User1A)
$acl | Set-Acl c:\Drivers}
</code>
The preceding script may be run from any accessible computer in the network. It creates a folder named “Drivers” on the root of the C drive on each one of the computers that it touches.
The $acl variable stores the security descriptor of the Drivers folder; $User1P defines the permission level for User1 (full control). The $User1A variable holds a new object that defines an access rule for a file or directory. $User1A is used to modify the security descriptor ($acl). The last line of the script pipes the modified security descriptor ($acl) to the Set-Acl cmdlet, which applies it to the Drivers folder.
Once the script executes, you get immediate confirmation that the folder has been created on each one of the remote computers.



One-to-many remoting can be used again to verify that User1 has full control permission to the Drivers folder:
<code>
Invoke-Command -ComputerName Fs1,CoreG2,Server1,Win81A `
-ScriptBlock {Get-Acl c:\drivers |
Select-Object PSComputername,AccessToString}
</code>

By default, remoting connects up to 32 computers at the same time. If you include more than 32 computers, PowerShell starts working with the first 32 and queues the remaining ones. As computers from the first batch complete their tasks, the others are pulled from the queue for processing. It is possible to use the Invoke-Command cmdlet with the -ThrottleLimit parameter to increase or decrease that number.
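For example, to raise the limit when targeting a long server list (a sketch, reusing the Servers.txt file from the earlier examples):
<code>
# Process up to 64 computers concurrently instead of the default 32
Invoke-Command -ComputerName (Get-Content C:\Servers.txt) `
    -ScriptBlock { Get-Service -Name WinRM } -ThrottleLimit 64
</code>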

 

Persistent PSSessions

When using Invoke-Command with the –ComputerName parameter, the remote computer creates a new instance of PowerShell to run your commands or scripts, sends the results back to you, and then closes the session. Each time Invoke-Command runs, even against the same computers, a new session is created and any work done by a previous session will not be available in memory to the new connection. The same can be said when you use Enter-PSSession with the –ComputerName parameter and then exit the connection by closing the console or using the Exit-PSSession command.
It is good to know that PowerShell can establish persistent connections (PSSessions) by using the New-PSSession cmdlet. New-PSSession allows you to launch a connection to one or more remote computers and starts an instance of Windows PowerShell on every target computer. You can then run Enter-PSSession or Invoke-Command with the –Session parameter to use the existing PSSession instead of starting a new session. Now you can execute commands on the remote computer and exit the session without killing the connection. Superb!
In the following example, the New-PSSession cmdlet is used to create four different PSSessions; the PSSessions are stored in a variable named $Servers. Get-Content reads the computer names from a text file named Servers.txt and passes that information to New-PSSession via the -ComputerName parameter.
<code>
$Servers = New-PSSession -ComputerName (Get-Content c:\Servers.txt)
</code>
After running the command, typing $Servers or Get-PSSession will allow you to confirm that the sessions have been created.
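To illustrate the persistence, a variable set in one Invoke-Command call against these sessions is still present in the next call. A sketch using the $Servers sessions created above:
<code>
# State persists between calls because the same PSSessions are reused
Invoke-Command -Session $Servers -ScriptBlock { $stamp = Get-Date }
Invoke-Command -Session $Servers -ScriptBlock { $stamp }   # still defined

# Close the sessions when you are done
Remove-PSSession -Session $Servers
</code>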

Closing Remarks

Remoting is a firewall-friendly feature that relies on the WS-Management (WS-Man) open standard protocol to function. Microsoft implements and manages WS-Man via the WinRM service. This article shows how to administer from a small to a large number of computers with Windows PowerShell remoting using interactive sessions, one-to-one, one-to-many, and persistent PSSessions. But wait, there’s more. We have not talked yet about multihop remoting, implicit remoting, managing non-domain computers, or PowerShell web access. Those and other topics will be explained and demonstrated in our next article in this series.

Windows Server 2012 R2 Essentials

Introduction

According to the U.S. Small Business Administration, there were around 28 million small businesses in this country as of 2013. Now, the SBA’s definition of “small” might be a bit different from mine; they apply that label to any business that has fewer than 500 employees. However, their statistics also show that over 19 million of these are sole proprietorships, a tax structure used by only the smallest businesses. Many of these are part-time businesses, but bring in enough revenue to have to file taxes.
Whether part-time or full-time, companies with 1 to 25 workers have many of the same needs as large companies – just on a smaller scale. They need email, web sites, and the ability to collaborate and share documents and information with others inside and outside of the company. They may need to be able to store records in a database, and they may need a way to conduct meetings remotely with colleagues, customers, vendors and others.
What they don’t need is the expense and headaches of a whole server room full of machines that need to be tended to on a full-time basis by someone with technical expertise. That’s why many of these small organizations found a solution in Microsoft’s Small Business Server (SBS), which had its origins in the Windows NT-based BackOffice Small Business Server that was introduced way back in 1997.

 

A brief history of SBS and Windows Server Essentials

The idea behind SBS was to take basically the same concept used by hardware makers to create multi-function machines (printer/scanner/fax/copier) and apply it to software. SBS in its various incarnations combined the Exchange email server with the SQL Server database server, Proxy Server or its successor ISA Server, and later SharePoint services. Earlier versions also included the Outlook mail client and FrontPage HTML editor. Different versions and editions supported from 25 to 75 users.
Microsoft refined and developed SBS through its final version, SBS 2011 (which came out in late 2010). Then, in the summer of 2012, they announced that they were discontinuing SBS. I can well remember the weeping, wailing and gnashing of teeth that occurred at that time; there was quite an uproar from SBS MVPs and small business IT admins who had come to depend on it as a relatively easy to deploy, cost effective “all in one” solution for small companies.
Microsoft announced, at the same time they broke the news about the demise of SBS, that its replacement would be Windows Server 2012 Essentials. Unfortunately this was more than just a name change; there was something big that was missing in WS2012 Essentials: Microsoft Exchange and the other server applications that came with SBS. Given that the cost savings of getting all of these server products in one low-priced package was the reason most customers deployed SBS in the first place, this didn’t sit well with most of them.
Microsoft’s answer to these complaints was that Windows Server Essentials, which is basically just the Windows Server operating system limited to 25 users, “allows customers the flexibility to choose which applications and services run on-premises and which run in the cloud.”  Of course small companies could buy a full copy of Exchange or SQL or SharePoint if they wanted to run those services on premises, but the cost would be far more than what they paid for SBS and the administrative overhead would be higher. Obviously, Microsoft’s “hidden agenda” (though not very well hidden) was to motivate small businesses to move their email and other server hosting needs to Office 365.

 

Where that leaves us today

Fast forward a couple of years to late 2014, and the cloud has gained much more acceptance. A cynic might surmise that small businesses have embraced it because they really had no other viable choice. But one can’t argue with the fact that cloud computing is beginning to mature and overcome some of the obstacles that made businesses and individuals hesitant to commit to it in earlier years.
Early concerns about security and reliability are slowly fading, as many small businesses have come to realize that the vast resources that public cloud providers have to put into securing their data centers makes cloud-hosted services, in most cases, more secure than the typical on-premises small business network.
Microsoft and Google are offering “three nines” (99.9% uptime) in their standard service level agreements (SLAs). This translates to no more than 8.76 hours of downtime per year (10.1 minutes per week), which often out-performs the reliability of small on-premises networks. There are other providers that can offer four or five nines (99.99 or 99.999% uptime) – at a higher cost, of course. This means considerably less downtime: just under 53 minutes and 5.26 minutes per year, respectively. Here is a table showing downtime for different service levels:

Availability %              Downtime per year   Downtime per month   Downtime per week
90%   ("one nine")          36.5 days           72 hours             16.8 hours
95%                         18.25 days          36 hours             8.4 hours
97%                         10.96 days          21.6 hours           5.04 hours
98%                         7.30 days           14.4 hours           3.36 hours
99%   ("two nines")         3.65 days           7.20 hours           1.68 hours
99.5%                       1.83 days           3.60 hours           50.4 minutes
99.8%                       17.52 hours         86.23 minutes        20.16 minutes
99.9%   ("three nines")     8.76 hours          43.8 minutes         10.1 minutes
99.95%                      4.38 hours          21.56 minutes        5.04 minutes
99.99%   ("four nines")     52.56 minutes       4.32 minutes         1.01 minutes
99.995%                     26.28 minutes       2.16 minutes         30.24 seconds
99.999%   ("five nines")    5.26 minutes        25.9 seconds         6.05 seconds
99.9999%   ("six nines")    31.5 seconds        2.59 seconds         0.605 seconds
99.99999%   ("seven nines") 3.15 seconds        0.259 seconds        0.0605 seconds

Table 1
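The figures in the table follow directly from the availability percentage; here is a quick sketch of the arithmetic in PowerShell:
<code>
# Downtime per year implied by an availability percentage
$availability = 99.9
$minutesPerYear = 365 * 24 * 60                    # 525,600 minutes
$downtime = (1 - $availability / 100) * $minutesPerYear
'{0:N1} minutes per year' -f $downtime             # 525.6 minutes = 8.76 hours
</code>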
With companies accepting the advantages of hosted Exchange, SharePoint and Lync or going to business Gmail accounts if they don’t need those other services, Windows Server Essentials begins to make more sense for small businesses.

 

Introducing Windows Server Essentials

When Windows Server 2012 was released, it came in four different editions: Foundation, Datacenter, Standard and Essentials. Foundation edition, limited to 15 users and 50 RRAS connections, was only available to original equipment manufacturers (OEMs) and could not be bought at retail. Datacenter edition was available through volume licensing and OEMs. For small organizations, the choice was between the Standard and Essentials editions, both available through retail channels.
Essentials is limited to 25 users and 250 RRAS connections, whereas Standard supports an unlimited number of both. Standard edition also allows for many more processors and more RAM, and includes Active Directory Federation Services, Hyper-V and the ability to install in server core mode, none of which are supported by Essentials. Other than Hyper-V and perhaps server core, these are things that almost no small businesses would ever need.
In addition to a lower cost, one of the main benefits of Essentials is its simplified management, which can be done through a touch-friendly web interface. Essentials is also integrated with Office 365 to make it easy for small businesses to incorporate those services with their Active Directory. However, if the nature of your business (or your personal preference) dictates that you keep your email services on-premises, Essentials also integrates with Exchange 2013. Microsoft offered a supported migration path from SBS to Server 2012 Essentials plus Exchange 2013.
In November 2013, Microsoft released the R2 version of Windows Server Essentials, along with other editions of Windows Server 2012 R2. Interestingly, in Windows Server 2012 R2, the company provides the ability to install the “Windows Server Essentials Experience” as a server role when you install the Standard or Datacenter edition. What this does is give you the dashboard, remote web access and other features that were unique to the Essentials edition, but without the limitations on the number of users and connections and with the features (ADFS, Hyper-V, server core) that Server Essentials lacks.
Microsoft also introduced a number of new features and functionalities in the regular Server Essentials edition and made improvements to many of the existing features. Server and client deployment options were improved, and there are new functionalities for managing users and groups, storage, data protection and more. We will be looking at some of those additions and enhancements in Part 2 of this article.

 

Summary

When Windows Server Essentials first came out, there was a great deal of disappointment in the small business ranks, but both the consultants who deploy it for customers and the small companies themselves are now realizing that it has a lot to offer and can save them money, even though it doesn’t include all the on-premises server applications that were a part of SBS.
In this multi-part article, we’re delving into its benefits, its limitations and how it can be used to best advantage in some common small business scenarios. In Part 2, we’ll take a more detailed look at some of the enhanced and new features in Windows Server 2012 R2 Essentials that can give small business admins more flexibility and control over their networks.