
Server 2012/2016/2019 Deduplication Data – how to check potential savings


If you have a file server and want to know how much data you could save by migrating to Server 2012 or later, copy the tool ddpeval.exe from c:\windows\system32\ on a Windows Server 2012 machine.

This tool only estimates how much data you would be able to save; it does not deduplicate anything! Examples:

ddpeval.exe \\dchv\d 

or

ddpeval.exe d:

————————————————————————————————————

Below you can see how to turn on deduplication on the server and the result of a real deduplication run.

First, enable the Data Deduplication feature (part of the File Server role), then start a job and check its status:

Enable-DedupVolume -Volume D:   # assumes the data volume is D:
Start-DedupJob -Volume D: -Type Optimization
Get-DedupStatus

The deduplication feature works with local disks and volumes only; it does not work against remote network shares.
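To see where the savings come from, the idea behind such an estimate can be sketched in a few lines of Python. This is a toy illustration only; ddpeval's real engine uses variable-size chunking and compression, and the 4 KiB chunk size here is an arbitrary assumption:

```python
import hashlib
import os

def estimate_dedup_savings(files, chunk_size=4096):
    """Toy estimate of deduplication savings: split every file into
    fixed-size chunks, keep one copy of each unique chunk, and report
    the fraction of bytes saved."""
    total = 0
    unique = {}  # chunk hash -> chunk length
    for data in files:
        total += len(data)
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            unique.setdefault(hashlib.sha256(chunk).hexdigest(), len(chunk))
    stored = sum(unique.values())
    return 1 - stored / total if total else 0.0

# Two identical 8 KiB files of random data: every chunk of the second
# file is a duplicate, so half of the bytes are saved.
data = os.urandom(8192)
print(estimate_dedup_savings([data, data]))  # -> 0.5
```

This is why file servers with many copies of the same documents or VHDs benefit the most.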

Powershell script to import LDAP object into exchange contact


With this PowerShell script it is possible to import external LDAP objects into Active Directory mail contacts.

Exchange will then expose the imported entries as contacts available to everyone.

At the end, the script removes contacts that were not updated in this run, i.e. that no longer exist in the LDAP source.

$count = 0
#load Exchange pssnapin
Add-PSSnapIn Microsoft.Exchange.Management.PowerShell.E2010
#load Assembly DirectoryServices
[System.Reflection.Assembly]::LoadWithPartialName("System.DirectoryServices.Protocols") 
[System.Reflection.Assembly]::LoadWithPartialName("System.Net") 
#load user and password to logon in Openldap
$UserName = "uid=reader,ou=users,dc=example,dc=com"  
$Password = "Password"
$OU = "OU-IMPORT"

$filter = "(objectclass=inetOrgPerson)"
#Insert openLDAP source server and the OU of the company created in this openLDAP 
$domain = "LDAP://10.10.10.1:389/o="+$OU+",dc=example,dc=com"

#Launch the search in the openLDAP
$root = New-Object -TypeName System.DirectoryServices.DirectoryEntry($domain,$UserName,$Password,'FastBind')
$query = New-Object System.DirectoryServices.DirectorySearcher($root,$filter)
$objuser = $query.findall()

#search user by user in the openLDAP ou
foreach ($user in $objUser.GetEnumerator()) {
  
    #this counter is only a safety counter, for testing purposes, in case you don't want to process all users at once
    if ($count -ge 0) #insert the number of users you want to import
    { 
    write-host "-------------------------------------------------------"
    #select the mail of the user in openLDAP
    $smtpmail = [Microsoft.Exchange.Data.ProxyAddress]("$($user.properties.mail)")		

if(-not([string]::IsNullOrEmpty($smtpmail.SmtpAddress))) # check if the smtp field is not empty
    {
    
    $mail = $smtpmail.SmtpAddress        
    write-host $user.properties.cn
   
  If ([string]$user.properties.displayname -ne (Get-MailContact ([string]$user.properties.displayname) -ErrorAction silentlycontinue)) #check whether the contact already exists in AD
	{
	    write-host "the contact doesn't exist, creating it"
    	#change the OU where the contacts will be created in your AD, changing "-organizationalunit" property
        New-MailContact -Name $user.properties.cn -DisplayName $user.properties.displayname -FirstName $user.properties.givenname -LastName $user.properties.sn -OrganizationalUnit ("OU="+$OU+",DC=example,DC=com") -ExternalEmailAddress $mail #-Alias $_.mailNickname
		Set-Mailcontact -identity ([string]$user.properties.displayname) -CustomAttribute10 $OU
		Set-Mailcontact -identity ([string]$user.properties.displayname) -CustomAttribute11 "updated"
    }
	Else
	{
	    write-host "the contact already exists, updating it"
		#Start-Sleep -s 15 #optional delay to let AD replicate the contact to the DCs
		Write-host "update contacts properties.... " $user.properties.displayname
        Set-Contact -identity ([string]$user.properties.displayname) -Phone $user.properties.telephonenumber -mobilePhone $user.properties.mobile -Office $user.properties.physicaldeliveryofficename -Title $user.properties.title -Department $user.properties.department -Company $user.properties.o -city $user.properties.l
	    Set-Mailcontact -identity ([string]$user.properties.displayname) -CustomAttribute10 $OU
		Set-Mailcontact -identity ([string]$user.properties.displayname) -CustomAttribute11 "updated"
	}
    
    
    
    }
    $count++
 }
}
#Remove contacts that were not updated, i.e. deleted from LDAP
get-mailcontact -OrganizationalUnit ("OU="+$OU+",DC=example,DC=com") -filter {CustomAttribute11 -eq $null}|remove-mailcontact -Confirm:$false
Start-Sleep -s 30 #delay of 30 seconds to let AD to replicate the contact in the DCS servers
get-mailcontact -OrganizationalUnit ("OU="+$OU+",DC=example,DC=com") -filter {CustomAttribute11 -ne $null}|set-mailcontact -CustomAttribute11 ""
Start-Sleep -s 30 #delay of 30 seconds to let AD to replicate the contact in the DCS servers

Fixing Office 365 DirSync account matching issues


Recently I had to fix some issues with DirSync. For some reason (there were some cloud users created before DirSync was enabled) there were duplicate users, because DirSync failed to match the already present cloud users with the corresponding AD (Active Directory) users. There were also accounts that failed to sync and thus failed to sync all attributes properly.

If there is already a cloud account and you need a synced account, you can create an AD account in a DirSynced OU. But be sure to create the user with a full UPN matching the one in Office 365 and the SMTP addresses that are present on the cloud account. With the next sync it should match both accounts. If not, matching fails and you end up with either duplicate accounts (one cloud user and a DirSynced user with the same name/last name/display name) or an InvalidSoftMatch.

When UPN/SMTP matching failed you can merge those accounts again by setting the ImmutableID on the Office 365 account (MsolUser) which is derived from the AD user’s ObjectGuid. You can only add this attribute to Office 365 accounts. After this is set, DirSync should match the accounts correctly.

So, how did I resolve this? See below:

When there are duplicates:

    • Remove user from DirSync (move to OU which is not synced, will only work when OU Filtering is used. If not, disable DirSync…).
    • Perform DirSync.
    • Remove duplicate synced user (NOT cloud user):
      • Remove-MSOLuser -UserPrincipalName <UPN> -RemoveFromRecycleBin
      • Add ImmutableID from AD user to Cloud user
        • $guid = (get-Aduser <username>).ObjectGuid
          $immutableID = [System.Convert]::ToBase64String($guid.tobytearray())
        • Connect to Azure AD (Connect-MsolService, when the Azure AD PowerShell module is installed).
        • Set-MSOLuser -UserPrincipalName <clouduserUPN> -ImmutableID $immutableID
        • It’s possible that the clouduserUPN must be changed to the <tenant>.onmicrosoft.com format. It should be changed by DirSync to correspond with the AD UPN.
        • See also http://www.joseph-streeter.com/?p=423
    • Place account back in correct (synced) AD OU.
    • Manually kick off a sync on the DirSync Server if you don’t want to wait (up to 3 hours with default settings):
      • C:\Program Files\Windows Azure Directory Sync\DirSyncConfigShell.psc1
      • Start-OnlineCoexistenceSync
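The ImmutableID used above is nothing more than the AD user's ObjectGuid serialized with .NET's Guid.ToByteArray() and Base64-encoded. If you ever need to compute it outside PowerShell, note that .NET emits the GUID's mixed-endian byte layout, which Python exposes as UUID.bytes_le. A sketch, using a made-up GUID rather than a real account:

```python
import base64
import uuid

def immutable_id_from_guid(guid_str):
    """Base64-encode a GUID the way the ImmutableID steps above expect:
    .NET's Guid.ToByteArray() emits the mixed-endian layout that Python
    exposes as UUID.bytes_le (NOT the big-endian UUID.bytes)."""
    return base64.b64encode(uuid.UUID(guid_str).bytes_le).decode("ascii")

# Made-up ObjectGuid for illustration:
print(immutable_id_from_guid("12345678-1234-1234-1234-123456789abc"))
# -> eFY0EjQSNBISNBI0VniavA==
```

Using bytes_le instead of the big-endian byte order matters: the wrong byte order produces a syntactically valid but non-matching ImmutableID.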

In my case it didn't always match the accounts, and I had to perform a Full DirSync (on the DirSync server):

    • Via MIISClient, Management Agents:
      • C:\Program Files\Windows Azure Active Directory Sync\SYNCBUS\Synchronization Service\UIShell\miisclient.exe
    • Be sure to be member of local group “FIMSyncAdmins”
      Names might be different depending on DirSync version
    • On the Windows Azure Active Directory Connector:
      • Properties>Run>Full Import Delta Sync
    • on the Active Directory Connector:
      • Properties>Run>Full Import Full Sync
  • Note that a Full Sync can take a long time if you have a lot of objects. Furthermore, changes can take a while to propagate in Office 365.
  • It might be necessary to edit an attribute (Description, office etc. Something that is synced), and then perform a (normal) sync.

When you have an InvalidSoftMatch (SMTP Address matching doesn’t work because SMTP address already exists in Cloud):

Within the MIISClient.exe on the DirSync server, you can check for errors. In this case the account wasn’t properly matched:

[screenshot: MIISClient sync errors showing the unmatched account]

  • Add ImmutableID from AD user to Cloud user:
    • $guid = (get-Aduser <username>).ObjectGuid
    • $immutableID = [System.Convert]::ToBase64String($guid.tobytearray())
    • Connect to Azure AD (Connect-MsolService, when the Azure AD PowerShell module is installed)
    • Set-MSOLuser -UserPrincipalName <clouduserUPN> -ImmutableID $immutableID
    • It’s possible that the clouduserUPN must be changed to the <tenant>.onmicrosoft.com format. It should be changed by DirSync to correspond with the AD UPN.
    • See also http://www.joseph-streeter.com/?p=423
  • Then perform a sync as described in the previous section.

In my case these procedures resolved my issues. But as always, use this information at your own risk, and best to make sure that you don't end up in a situation like this in the first place.

See also:

One or more objects don’t sync when using the Azure Active Directory Sync tool http://support.microsoft.com/kb/2643629/en-us

How to use SMTP matching to match on-premises user accounts to Office 365 user accounts for directory synchronization http://support.microsoft.com/kb/2641663/en-us

How to delete snapshot with wmic


Sometimes deleting a shadow copy with vssadmin fails; in that case you can use the following procedure.

 

  • Start an elevated command prompt
  • Type wmic and press Enter
  • The prompt changes to wmic:root\cli
  • Type shadowcopy to list the current shadow copies
  • Type shadowcopy delete and confirm each deletion to remove the copies one after the other
  • Type exit to leave the WMI command line

Redirect http to https web page on IIS7


First, install the IIS URL Rewrite module from Microsoft.

Then open the web.config of the website you want to redirect and add the following rule inside the <rules> tag:

<rule name="HTTP to HTTPS redirect" enabled="true" stopProcessing="true">
  <match url="(.*)" />
  <conditions>
    <add input="{HTTPS}" pattern="off" ignoreCase="true" />
  </conditions>
  <action type="Redirect" redirectType="Found" url="https://{HTTP_HOST}/{R:1}" />
</rule>
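What the rule does can be modelled as plain logic: when the {HTTPS} server variable is off, answer with a 302 (the "Found" redirect type) to the same host and path over https; otherwise let the request through. A toy Python model of that decision, not IIS itself:

```python
def redirect_http_to_https(https, http_host, path):
    """Toy model of the rewrite rule above.

    https     -- value of the {HTTPS} server variable ("on"/"off")
    http_host -- value of {HTTP_HOST}
    path      -- the part matched by url="(.*)", i.e. {R:1}

    Returns (302, target_url) when a redirect applies, else None.
    """
    if https.lower() == "off":  # pattern="off" with ignoreCase="true"
        return 302, "https://{0}/{1}".format(http_host, path)
    return None  # request already came in over HTTPS: no redirect

print(redirect_http_to_https("off", "www.example.com", "app/login"))
# -> (302, 'https://www.example.com/app/login')
```

Because the condition tests {HTTPS} rather than the port, the rule also behaves correctly behind non-standard bindings.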

Removing WPAD from DNS block list


If you want to implement proxy auto-discovery, you also need to create a wpad DNS record.
In Windows Server 2008 this host name is blocked by default and the DNS server will not resolve queries for it.
To remove the block, replace the global query block list so that it no longer contains wpad; for example, keeping only the default isatap entry: dnscmd /config /globalqueryblocklist isatap

Adding a disk to a Windows 2008 Failover Cluster using cluster.exe commands


This isn’t specific to multi-site clustering, but I’ve certainly had to use this many times when adding devices to my multi-site clusters. Adding disks to a multi-site Windows 2008 cluster is not as easy as it should be. In Windows 2008, Microsoft has added some new “logic” while adding disk resources to a cluster. In Windows 2008, when you attempt to “Add a disk” through the cluster administrator GUI, the cluster does a quick check on the available disks to ensure that the disks are present on all nodes of the cluster before presenting this as an available disk in the Cluster Administrator GUI. This can be bad for geo-clusters as the disks are unlikely read/write enabled on all sites, causing the cluster GUI to display an error message:

You may also experience this same behavior when adding a disk resource to a 2008 cluster that you only want to have available to a single node, or a subset of nodes. This issue could also occur if you deleted a cluster disk resource from your multi-site cluster and attempted to add it back in through the cluster GUI. Because of this behavior, we need to work a little harder to add a disk into a cluster for these situations. To work around this issue, you have a couple of options. The first option would be to evict the offending node(s) from the cluster and then add the storage using the cluster administrator GUI. Yes, this might be a bit painful for some, but if your environment can handle evicting/adding nodes without impact, this is probably the easiest way to get these disks into the cluster.

After evicting the remote nodes, the cluster would then only check the disks from your local storage system on the local node and would see that the disks are viable for cluster use. Now using cluster GUI, when you attempt to add a disk, the error message no longer displays and you will now be presented with the options to add the disks into the cluster. Once you’ve added the disks into the cluster, you would then re-join the other nodes back into the cluster.

If evicting a node isn’t an option, you can manually add the disk into the cluster using cluster.exe commands. I wrote a little MSKB about how to do this for Windows 2000/2003 in MSKB 555312, and there are some slight differences in Windows 2008. Microsoft has renamed just about all of the cluster’s physical disk private properties for Longhorn so my KB isn’t quite accurate for 2008. To manually add a disk using cluster.exe in Windows 2008, you would do the following:

First, we create the empty resource with no private properties…this is the same first step as documented in 555312:

C:\>cluster res "Disk Z:" /create /type:"Physical Disk" /group:"Available Storage"

This creates a resource of the Physical Disk type in the group named “Available Storage” with no private properties. Next, my favorite secret hidden private property in 2000/2003 Drive has been renamed in Windows 2008. It has been renamed to DiskPath and it is no longer a hidden property, so it isn’t top secret anymore. If you look at the private properties of a physical disk resource you’ll see:

C:\>cluster res "Disk Z:" /priv

Listing private properties for 'Disk Z:':

T  Resource  Name             Value
-- --------  ---------------  -----------------------
D  Disk Z:   DiskIdType       5000 (0x1388)
D  Disk Z:   DiskSignature    0 (0x0)
S  Disk Z:   DiskIdGuid
D  Disk Z:   DiskRunChkDsk    0 (0x0)
B  Disk Z:   DiskUniqueIds    ... (0 bytes)
B  Disk Z:   DiskVolumeInfo   ... (0 bytes)
D  Disk Z:   DiskArbInterval  3 (0x3)
S  Disk Z:   DiskPath
D  Disk Z:   DiskReload       0 (0x0)
D  Disk Z:   MaintenanceMode  0 (0x0)
D  Disk Z:   MaxIoLatency     1000 (0x3e8)

So now I can use this DiskPath value and Windows will magically figure out all of the other gory private properties for my disk using the mount point I specify in the DiskPath parameter. Notice in the above output the DiskSignature, DiskUniqueIds and DiskVolumeInfo fields are empty after creating the “empty” physical drive resource. Now when I use the DiskPath parameter, Windows will magically figure out these fields based on the mount point info provided. I’ve mounted this disk as my Z: drive, so here’s my command using the DiskPath parameter:

C:\>cluster res "Disk Z:" /priv DiskPath="Z:"

At this point, you would bring the disk online in the cluster and it fills out the rest of the private property values for the disk. After bringing the disk online, when you look at the resource’s private properties, it shows:

C:\>cluster res "Disk Z:" /priv

Listing private properties for 'Disk Z:':

T  Resource  Name             Value
-- --------  ---------------  -----------------------
D  Disk Z:   DiskIdType       0 (0x0)
D  Disk Z:   DiskSignature    4198681706 (0xfa42cc6a)
S  Disk Z:   DiskIdGuid
D  Disk Z:   DiskRunChkDsk    0 (0x0)
B  Disk Z:   DiskUniqueIds    10 00 00 00 ... (132 bytes)
B  Disk Z:   DiskVolumeInfo   01 00 00 00 ... (48 bytes)
D  Disk Z:   DiskArbInterval  3 (0x3)
S  Disk Z:   DiskPath
D  Disk Z:   DiskReload       0 (0x0)
D  Disk Z:   MaintenanceMode  0 (0x0)
D  Disk Z:   MaxIoLatency     1000 (0x3e8)

Notice that the DiskSignature, DiskUniqueIds and DiskVolumeInfo are now filled in for this disk. You'll also notice that the DiskPath value has automatically been cleared; I'm not sure why this occurs, but it seems that once the DiskPath value has resolved the other properties, the DiskPath is cleared. If you check the resource properties before bringing the disk online, you'll see the DiskPath value set, but after bringing the cluster resource online, the DiskPath value is cleared and the signature, ID and volume fields are populated.

LUNs presented from a CLARiiON array to a host are read only


Knowledgebase Solution   

Environment:  Product: CLARiiON
Environment:  EMC SW: Navisphere Manager
Environment:  EMC SW: Replication Manager
Problem:  LUNs are read only when allocated to a host from Navisphere.
Problem:  LUNs presented to host are read only.
Change:  Customer added the LUN to a Storage Group from Navisphere.
Root Cause:  When a Replication Manager job runs, it leaves attributes set on a LUN. If the LUN is mounted again using Replication Manager, it will be presented to the host correctly. If the LUN is mounted using Navisphere instead, the attributes are not cleared and may cause problems for hosts accessing the LUNs.

Fix:  Follow these steps:
Run diskpart from an elevated command prompt (C:\> diskpart)
DISKPART> select disk 4

(Select the appropriate disk, which in this case is 4.)
 
DISKPART> detail disk

PowerDevice by PowerPath
Disk ID: 9F0B09CD
Type : FIBRE
Bus : 0
Target : 1
LUN ID : 3

Volume ###  Ltr  Label        Fs    Type       Size   Status   Info
----------  ---  -----------  ----  ---------  -----  -------  ----
Volume 5    M    SQL 2005 MD  NTFS  Partition  10 GB  Healthy
 
Verify the Disk ID is set correctly. Note the Volume number as well and use it in the following command:

DISKPART> select volume 5
DISKPART> detail volume

Disk ###  Status  Size   Free  Dyn  Gpt
--------  ------  -----  ----  ---  ---
* Disk 4  Online  10 GB  0 B

Readonly : Yes
Hidden : No
No Default Drive Letter : Yes
Shadow Copy : Yes
 
If any of Read Only, Hidden, or No Default Drive Letter are set to Yes, clear them with the following command:

DISKPART> att vol clear readonly hidden nodefaultdriveletter

Volume attributes cleared successfully.
DISKPART>exit
 
Notes:  The att vol clear readonly hidden nodefaultdriveletter command clears the attributes set for the LUN presented under Windows.
Notes:  The drive needed to be rescanned in Device Manager before it was presented to the host again.
Notes:  The LUNs should be added to a Storage Group using Replication Manager.