How to use the DNS translation feature

Description
The DNS translation feature available in the FortiOS firmware is designed to modify the DNS reply from a DNS server.

It is typically used to allow internal users of a network to access resources by their private IP addresses, which can simplify the firewall configuration.

A network diagram is provided below with an example that illustrates how to configure this feature.

In this example, the client sends a DNS resolution request to the DNS server 172.31.17.252 for the resource “server1.lab.mycompany.com”. The DNS reply sent by the DNS server is 172.31.17.37 (the public IP address of “server1”), but the reply is translated on the FortiGate unit into 10.73.1.37, which is the private IP address of the same resource, “server1”.
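For reference, the translation in this example would typically be configured in the firewall dnstranslation table. The excerpt below is a minimal sketch; verify the exact syntax against the CLI reference for your FortiOS version:

```
config firewall dnstranslation
    edit 1
        set src 172.31.17.37     <<< public IP returned by the DNS server
        set dst 10.73.1.37       <<< private IP substituted into the reply
        set netmask 255.255.255.255
    next
end
```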

 

Solution
How internal users can access internal resources via an external VIP

Products
FortiGate
Description
This article describes a solution for the following requirement:

A user located on an internal LAN needs to access a server located on an internal LAN or DMZ, but by using a public Virtual IP on the FortiGate.
External users also access the same server via the “external” port.

The following diagram illustrates this scenario:

                      port3                            port1
[ INTERNAL USER ] ======|===== [ FortiGate ] ============== [ EXTERNAL USERS ]
    10.67.2.82          |        10.67.0.176
                        |             |
[ SERVER1 ] ============|             | port2
    10.67.0.164                       |
                                      |
[ SERVER2 ] ==========================|
    10.91.3.113

VIPs on the FortiGate: 172.31.16.164 -> SERVER1, 172.31.19.113 -> SERVER2

The INTERNAL USER PC accesses SERVER1 and SERVER2 with destination IP = 172.31.16.164 or 172.31.19.113, which the FortiGate translates to the real server IPs before routing the traffic to the servers.
Solution
FortiGate configuration excerpt:
========================

config firewall vip
    edit "SERVER1"
        set extip 172.31.16.164
        set extintf "any"    <<< Specifying "any" is a requirement
        set mappedip 10.67.0.164
    next
    edit "SERVER2"
        set extip 172.31.19.113
        set extintf "any"    <<< Specifying "any" is a requirement
        set mappedip 10.91.3.113
    next
end

config firewall policy
    edit 4
        set srcintf "port1"
        set dstintf "port3"
        set srcaddr "all"
        set dstaddr "SERVER1"
        set action accept
        set schedule "always"
        set service "ANY"
    next
    edit 3
        set srcintf "port3"
        set dstintf "port3"
        set srcaddr "all"
        set dstaddr "SERVER1"
        set action accept
        set schedule "always"
        set service "ANY"
    next
    edit 5
        set srcintf "port1"
        set dstintf "port2"
        set srcaddr "all"
        set dstaddr "SERVER2"
        set action accept
        set schedule "always"
        set service "ANY"
    next
    edit 6
        set srcintf "port3"
        set dstintf "port2"
        set srcaddr "all"
        set dstaddr "SERVER2"
        set action accept
        set schedule "always"
        set service "ANY"
    next
end

Note: policies 4 and 5 are used for external users; policies 3 and 6 are used for internal users.

Traffic flow snippet for an HTTP session from USER to SERVER1:
==============================================

FGT-1 (root) # diagnose sniffer packet any "host 10.67.2.82 or host 10.67.0.164" 4
interfaces=[any]
filters=[host 10.67.2.82 or host 10.67.0.164]
6.798488 port3 in 10.67.2.82.2080 -> 172.31.16.164.8090: syn 391946722
6.798556 port3 out 10.67.0.176.26756 -> 10.67.0.164.80: syn 391946722
6.798856 port3 in 10.67.0.164.80 -> 10.67.0.176.26756: syn 3548167716 ack 391946723
6.798873 port3 out 172.31.16.164.8090 -> 10.67.2.82.2080: syn 3548167716 ack 391946723
6.799125 port3 in 10.67.2.82.2080 -> 172.31.16.164.8090: ack 3548167717
6.799131 port3 out 10.67.0.176.26756 -> 10.67.0.164.80: ack 3548167717

Note: for this traffic (port3 to port3), even though NAT is not enabled on the policy, the source IP address gets translated to the FortiGate's internal IP address.

Session list for an HTTP session from USER to SERVER1
==========================================

FGT-1 (root) # diagnose sys session list

session info: proto=6 proto_state=05 duration=2 expire=0 timeout=3600 flags=00000000 sockflag=00000000 sockport=0 av_idx=0 use=5
origin-shaper=
reply-shaper=
per_ip_shaper=
ha_id=0 hakey=758
policy_dir=0 tunnel=/
state=may_dirty
statistic(bytes/packets/allow_err): org=3407/13/1 reply=18467/19/1 tuples=4
orgin->sink: org pre->post, reply pre->post dev=4->4/4->4 gwy=10.67.0.164/10.67.2.82
hook=pre dir=org act=dnat 10.67.2.82:2075->172.31.16.164:8090(10.67.0.164:80)
hook=post dir=org act=snat 10.67.2.82:2075->10.67.0.164:80(10.67.0.176:26815)
hook=pre dir=reply act=dnat 10.67.0.164:80->10.67.0.176:26815(10.67.2.82:2075)
hook=post dir=reply act=snat 10.67.0.164:80->10.67.2.82:2075(172.31.16.164:8090)
pos/(before,after) 0/(0,0), 0/(0,0)
misc=0 policy_id=3 id_policy_id=0 auth_info=0 chk_client_info=0 vd=0
serial=0080d305 tos=ff/ff ips_view=0 app_list=0 app=0
dd_type=0 dd_rule_id=0
per_ip_bandwidth meter: addr=10.67.2.82, bps=19399

Traffic flow snippet for an HTTP session from USER to SERVER2:
==============================================

FGT1# diagnose sniffer packet any "host 10.67.2.82 or host 10.91.3.113 and port 80" 4

4.741440 port3 in 10.67.2.82.2726 -> 172.31.19.113.80: syn 53278201
4.741515 port2 out 10.67.2.82.2726 -> 10.91.3.113.80: syn 53278201
4.741697 port2 in 10.91.3.113.80 -> 10.67.2.82.2726: syn 153837872 ack 53278202
4.741722 port3 out 172.31.19.113.80 -> 10.67.2.82.2726: syn 153837872 ack 53278202
4.742024 port3 in 10.67.2.82.2726 -> 172.31.19.113.80: ack 153837873
4.742033 port2 out 10.67.2.82.2726 -> 10.91.3.113.80: ack 153837873
4.742917 port3 in 10.67.2.82.2726 -> 172.31.19.113.80: psh 53278202 ack 153837873
4.742924 port2 out 10.67.2.82.2726 -> 10.91.3.113.80: psh 53278202 ack 153837873
4.743306 port2 in 10.91.3.113.80 -> 10.67.2.82.2726: ack 53279042
4.743315 port3 out 172.31.19.113.80 -> 10.67.2.82.2726: ack 53279042

Session list for an HTTP session from USER to SERVER2
==========================================

session info: proto=6 proto_state=01 duration=2 expire=3597 timeout=3600 flags=00000000 sockflag=00000000 sockport=0 av_idx=0 use=3
origin-shaper=
reply-shaper=
per_ip_shaper=
ha_id=0 hakey=1475
policy_dir=0 tunnel=/
state=may_dirty
statistic(bytes/packets/allow_err): org=92/2/1 reply=52/1/1 tuples=2
orgin->sink: org pre->post, reply pre->post dev=4->3/3->4 gwy=10.91.3.113/10.67.2.82
hook=pre dir=org act=dnat 10.67.2.82:2752->172.31.19.113:80(10.91.3.113:80)
hook=post dir=reply act=snat 10.91.3.113:80->10.67.2.82:2752(172.31.19.113:80)
pos/(before,after) 0/(0,0), 0/(0,0)
misc=0 policy_id=6 id_policy_id=0 auth_info=0 chk_client_info=0 vd=0
serial=0082338c tos=ff/ff ips_view=0 app_list=0 app=0
dd_type=0 dd_rule_id=0
per_ip_bandwidth meter: addr=10.67.2.82, bps=4049

Fixing Office 365 DirSync account matching issues

Recently I had to fix some issues with DirSync. Because some cloud users had been created before DirSync was enabled, there were duplicate users: DirSync failed to match the already present cloud user with the corresponding AD (Active Directory) user. There were also accounts that failed to sync and thus did not get all their attributes synced properly.

If a cloud account already exists and you need a synced account, you can create an AD account in a DirSynced OU. Be sure to create the user with a full UPN matching the one in Office 365, and with the SMTP addresses that are present on the cloud account. With the next sync it should match both accounts. If not, matching fails and you end up with either duplicate accounts (one cloud user and one DirSynced user with the same name/last name/display name) or an InvalidSoftMatch.

When UPN/SMTP matching has failed, you can merge the accounts by setting the ImmutableID on the Office 365 account (MsolUser), which is derived from the AD user's ObjectGuid. This attribute can only be set on Office 365 accounts. After it is set, DirSync should match the accounts correctly.
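As a sanity check outside PowerShell, the same ObjectGuid-to-ImmutableID conversion can be reproduced in bash. The GUID below is a made-up example; the byte reshuffling accounts for .NET's Guid.ToByteArray() storing the first three GUID groups little-endian:

```shell
#!/bin/bash
# Convert an AD ObjectGuid (hypothetical example) to its Base64 ImmutableID.
guid="3fa85f64-5717-4562-b3fc-2c963f66afa6"
IFS=- read -r a b c d e <<< "$guid"

# Guid.ToByteArray() emits the first three groups little-endian, so reverse
# the byte order within those groups before Base64-encoding the 16 bytes.
hex="${a:6:2}${a:4:2}${a:2:2}${a:0:2}${b:2:2}${b:0:2}${c:2:2}${c:0:2}${d}${e}"

immutable_id=$(printf '%s' "$hex" | xxd -r -p | base64)
echo "$immutable_id"   # ZF+oPxdXYkWz/CyWP2avpg==
```

The printed value is what you would pass to Set-MsolUser via -ImmutableID for that GUID.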

So, how did I resolve this? See below:

When there are duplicates:

    • Remove the user from DirSync (move it to an OU that is not synced; this only works when OU filtering is used, otherwise you would have to disable DirSync).
    • Perform DirSync.
    • Remove duplicate synced user (NOT cloud user):
      • Remove-MsolUser -UserPrincipalName <UPN> -RemoveFromRecycleBin
      • Add ImmutableID from AD user to Cloud user
        • $guid = (Get-ADUser <username>).ObjectGuid
        • $immutableID = [System.Convert]::ToBase64String($guid.ToByteArray())
        • Connect to Azure AD (Connect-MsolService, available when the Azure AD PowerShell module is installed).
        • Set-MsolUser -UserPrincipalName <clouduserUPN> -ImmutableID $immutableID
        • It's possible that the clouduserUPN must first be changed to the <tenant>.onmicrosoft.com format; DirSync should later change it back to correspond with the AD UPN.
        • See also http://www.joseph-streeter.com/?p=423
    • Place account back in correct (synced) AD OU.
    • Manually kick off a sync on the DirSync Server if you don’t want to wait (up to 3 hours with default settings):
      • C:\Program Files\Windows Azure Directory Sync\DirSyncConfigShell.psc1
      • Start-OnlineCoexistenceSync

In my case it didn't always match the accounts, and a Full DirSync (on the DirSync server) was required:

    • Via MIISClient, Management Agents:
      • C:\Program Files\Windows Azure Active Directory Sync\SYNCBUS\Synchronization Service\UIShell\miisclient.exe
    • Be sure to be a member of the local group “FIMSyncAdmins”
      (names might differ depending on the DirSync version)
    • On the Windows Azure Active Directory Connector:
      • Properties>Run>Full Import Delta Sync
    • On the Active Directory Connector:
      • Properties>Run>Full Import Full Sync
    • Note that a Full Sync can take a long time if you have a lot of objects. Furthermore, changes can take a while to propagate in Office 365.
    • It might be necessary to edit an attribute (Description, Office, or anything else that is synced), and then perform a (normal) sync.

When you have an InvalidSoftMatch (SMTP address matching fails because the SMTP address already exists in the cloud):

Within the MIISClient.exe on the DirSync server, you can check for errors. In this case the account wasn’t properly matched:

(screenshot: MIISClient error showing the unmatched account)

  • Add ImmutableID from AD user to Cloud user:
    • $guid = (Get-ADUser <username>).ObjectGuid
    • $immutableID = [System.Convert]::ToBase64String($guid.ToByteArray())
    • Connect to Azure AD (Connect-MsolService, available when the Azure AD PowerShell module is installed)
    • Set-MsolUser -UserPrincipalName <clouduserUPN> -ImmutableID $immutableID
    • It's possible that the clouduserUPN must first be changed to the <tenant>.onmicrosoft.com format; DirSync should later change it back to correspond with the AD UPN.
    • See also http://www.joseph-streeter.com/?p=423
  • Then perform a sync as described in the previous section.

In my case these procedures resolved the issues. But as always, use this information at your own risk, and try to make sure you don't end up in a situation like this in the first place.

See also:

One or more objects don’t sync when using the Azure Active Directory Sync tool http://support.microsoft.com/kb/2643629/en-us

How to use SMTP matching to match on-premises user accounts to Office 365 user accounts for directory synchronization http://support.microsoft.com/kb/2641663/en-us

Create a MySQL Slave using Replication with No Downtime

I have a customer with over 100GB of MySQL data, and taking their site down for even a few minutes is not feasible. I really wanted to get a slave set up in case the main server ever dies. Even though the server is backed up, it would take 2-3 hours (or longer) to restore the MySQL server, which is not acceptable.

The solution is to use replication. The traditional problem with this approach is that the tables stay locked while the mysqldump runs: for a database this size, close to 4-5 hours.

Idera's free tool Linux Hot Copy (hcp) was the answer I was looking for. By using hcp, you can lock the tables, make a near-instant "snapshot", record the master position, and unlock the tables. At your leisure, just copy the snapshot of the MySQL data to your slave device and start up your replication! This makes setting up new slaves a snap, with minimal impact on your business.
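Sketched end to end, the master-side locking window looks like this (important: FLUSH TABLES WITH READ LOCK only holds while the mysql session that issued it stays open, so keep that session open and take the snapshot from a second terminal before unlocking):

```
mysql> FLUSH TABLES WITH READ LOCK;
mysql> SHOW MASTER STATUS;      -- record File and Position

# in a second terminal, while the lock is held:
hcp /mnt/snap /dev/sda2

mysql> UNLOCK TABLES;
```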

First off, I will assume you have a production MySQL server up and running. In my scenario, I am using CentOS 5.6 64-bit and MySQL 5.5; this tutorial will probably work for older versions as well. I also assume you know how to edit and copy files at the Linux command line. If you don't, you should probably get help from an experienced system administrator.

If you have not done so already, set up another mysql server for your slave. It should be a decent server, equal to your current live production server so you can switch to it in the event of failure.

I will also assume:

master server = 192.168.1.100
slave server = 192.168.2.200

You'll need to substitute your IP addresses in place of mine.

On Master Server (192.168.1.100):

1. Install Linux Hot Copy. If you need help with the installation, consult Idera's documentation.

2. Set up your server ID and enable bin-logs. Note that bin logs record every change to your database, so make sure you have ample space available!

Edit your /etc/my.cnf file and put these lines at the top, just under the [mysqld] line.

# enable mysql bin logs and server-id for mysql replication
       log-bin=mysql-bin
       server-id=1

Restart MySQL so that bin logging starts, e.g. /etc/init.d/mysql restart. You can verify it's working by issuing the SHOW MASTER STATUS\G command.

3. Create a user that has replication privs on the Master Server.

mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'192.168.2.200' IDENTIFIED BY 'password';

4. The next few steps need to be done quickly so that you minimize your MySQL server's downtime. Make sure you know up front the device (e.g. /dev/sda2) where your MySQL data is located (typically /var/lib/mysql on CentOS):

Lock your Master MySQL Tables and show the status location of the bin log….

mysql> FLUSH TABLES WITH READ LOCK; SHOW MASTER STATUS;

Make sure you record the information, e.g. the file name and log position.

From the command line, enter the following command, replacing /dev/sda2 with your raw device:

hcp /mnt/snap /dev/sda2

Back to MySQL, unlock your tables:

mysql> unlock tables;

Now you have a perfect copy of your "frozen" data at the following location (the path may vary):

FROZEN DATA LOCATION:
/mnt/snap

On Slave Server: 192.168.2.200

On the slave server, make sure MySQL is stopped and move the old mysql folder: (make sure this is the SLAVE SERVER 192.168.2.200 and NOT the live server!):

/etc/init.d/mysql stop
mv /var/lib/mysql /var/lib/mysql.old

Back on the Master Server: 192.168.1.100

1. Copy the “frozen” mysql data:

rsync -avz /mnt/snap root@192.168.2.200:/var/lib/mysql

2. Copy my.cnf to slave:

scp /etc/my.cnf root@ip_or_host:/etc/my.cnf

3. Once the copy is complete, you can delete your "hot copy":

hcp -r /dev/hcp1

Now, go to your Slave Server: 192.168.2.200

1. Edit /etc/my.cnf, change server-id to 2, and comment out or delete the log-bin line you added on the master.

2. Start up MySQL, then enter the following commands to connect to the master, replacing the log file name and position with the values you recorded earlier:

mysql> CHANGE MASTER TO 
      MASTER_HOST='192.168.1.100', 
      MASTER_USER='repl', 
      MASTER_PASSWORD='password', 
      MASTER_LOG_FILE='mysql-bin.000001',
      MASTER_LOG_POS=12345678;
mysql> START SLAVE;

mysql> SHOW SLAVE STATUS\G

MySQL will show how far behind it is; it might take a few minutes to catch up, depending on the number of changes made to your database during the copy.

I hope you enjoyed this tutorial on MySQL Replication with no downtime. Now it’s easy!

Linux shadow copy

How to add shadow copy functionality to Linux.

There is software that adds shadow copies to Linux; below are instructions for CentOS 6.

First of all, download and install the two packages below:

idera-hotcopy-5-14-4-x86_64

r1soft-setup-5-14-4-x86_64

Then install kernel-headers and kernel-devel for your running kernel version.

If you're running the latest kernel version, you can use

yum install kernel-headers kernel-devel

otherwise you can install them directly from the vault.centos.org repository, as below:

rpm -ivh http://vault.centos.org/6.6/updates/x86_64/Packages/kernel-devel-2.6.32-504.1.3.el6.x86_64.rpm
rpm -ivh http://vault.centos.org/6.6/updates/x86_64/Packages/kernel-headers-2.6.32-504.1.3.el6.x86_64.rpm

Now that you have the software and the kernel headers, you can build the specific kernel module for the hcp device.

Launch r1soft-setup-old --get-module to build it.

Now you can create a shadow copy with

hcp /mnt/snapshot /dev/sda1

This creates a shadow copy of /dev/sda1 at /mnt/snapshot.

How to update the CA root bundle on CentOS

For RHEL 6 or later, you should be using update-ca-trust.

For older versions of Fedora, CentOS, and Red Hat:

curl uses the system-default CA bundle, which is stored in /etc/pki/tls/certs/ca-bundle.crt. Before you change it, make a copy of that file so that you can restore the system default if you need to. You can simply append new CA certificates to that file, or you can replace the entire bundle.
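A minimal sketch of the back-up-then-append approach. Temporary stand-in files are used here so it can be tried without root; on a real system the bundle is /etc/pki/tls/certs/ca-bundle.crt:

```shell
#!/bin/bash
# Stand-ins for the real files (see the paths mentioned above).
bundle=$(mktemp)      # plays the role of /etc/pki/tls/certs/ca-bundle.crt
new_cert=$(mktemp)    # plays the role of your new CA certificate (PEM)
echo "EXISTING-CA-CERTS" > "$bundle"
echo "NEW-CA-CERT" > "$new_cert"

cp "$bundle" "$bundle.bak"      # back up so the system default can be restored
cat "$new_cert" >> "$bundle"    # append the new CA certificate to the bundle

grep -c 'CA' "$bundle"          # prints 2: both entries are now in the bundle
```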

Are you also wondering where to get the certificates? I (and others) recommend curl.haxx.se/ca. In one line:

curl https://curl.haxx.se/ca/cacert.pem -o /etc/pki/tls/certs/ca-bundle.crt

How to solve (some) graphical issues with PuTTY, UTF-8, and ncurses

Hello everybody,

I'm writing this article to help anyone who has had problems with garbled or mismatched text, or other graphical issues, in software that uses the famous ncurses library (libncurses5). It all started when I was using (via PuTTY) my favorite command-line log-parsing tool: the great multitail (go out there and grab it if you don't know it). I started noticing some odd errors: part of the text was garbled, and some lines were the wrong size or were replaced with wrong characters, as you can see in the screenshot:

(screenshot: multitail with garbled output in a CentOS environment)

This problem happened when using PuTTY against a CentOS 6.6 system, with the locale set to UTF-8, libncurses 5.x, and multitail 6.4.1.

This is the result of multiple problems, and some steps are required to fix all the issues:

  1. Download the latest version of PuTTY (0.64 as of today).
  2. Make sure that under Window -> Translation and Connection -> Data the following are set:

    Remote character set: UTF-8, with “Use Unicode line drawing code points” enabled
    Terminal-type string: putty
  3. Then, you have to set an environment variable to tell the ncurses libraries to use UTF-8:

export NCURSES_NO_UTF8_ACS=1

You should also make it stick (echo 'export NCURSES_NO_UTF8_ACS=1' >> ~/.bashrc).
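A slightly more careful, idempotent variant of that one-liner, shown against a stand-in file so it can be tried safely (substitute ~/.bashrc for real use):

```shell
#!/bin/bash
rc=$(mktemp)   # stand-in for ~/.bashrc
line='export NCURSES_NO_UTF8_ACS=1'

# Append the line only if it is not already present (exact, whole-line match).
grep -qxF "$line" "$rc" || echo "$line" >> "$rc"
grep -qxF "$line" "$rc" || echo "$line" >> "$rc"   # second run is a no-op

grep -cxF "$line" "$rc"   # prints 1: the line was added exactly once
```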

This should solve all your issues with UTF-8 and the ncurses libraries.

How to copy with cp to include hidden files and hidden directories and their contents?

Bash itself has a good solution: a shell option. With it you can cp, mv, and so on:

shopt -s dotglob # include dot files in pathname expansion (turn on dot files)

and

shopt -u dotglob # exclude dot files from pathname expansion (turn off dot files)

The above solution uses standard bash functionality.

NOTE:

shopt # without arguments, shows the status of all shell options
-u # abbreviation of "unset"
-s # abbreviation of "set"
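A quick, self-contained illustration of the dotglob behaviour described above (the file names are arbitrary examples):

```shell
#!/bin/bash
# Scratch directory with one hidden and one visible file.
tmp=$(mktemp -d)
touch "$tmp/.hidden" "$tmp/visible"

shopt -u dotglob                        # default: globs skip dot files
without=$(cd "$tmp" && echo *)

shopt -s dotglob                        # now globs include dot files
with=$(cd "$tmp" && echo *)

echo "without dotglob: $without"        # without dotglob: visible
echo "with dotglob: $with"              # with dotglob: .hidden visible

rm -rf "$tmp"
```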