Friday, May 17, 2013

Splitting a FlexClone volume from its parent


If you want a FlexClone volume to have its own disk space, rather than sharing its parent's, you can split it from the parent using the commands below.

prod_filer_h2> snap list vmdk_vol
Volume vmdk_vol
working...

  %/used       %/total  date          name
----------  ----------  ------------  --------
 31% (29%)   30% (27%)  Sep 24 22:46  clone_qa_vmdk_vol.1 (busy,vclone)

prod_filer_h2> snap delete vmdk_vol clone_qa_vmdk_vol.1
Snapshot clone_qa_vmdk_vol.1 is busy because of LUN clone, snapmirror, sync mirror, volume clone, snap restore, dump, CIFS share, volume copy, ndmp, WORM volume, SIS Clone

prod_filer_h2> vol status qa_vmdk_vol
         Volume State           Status            Options
        qa_vmdk_vol online          raid_dp, flex     nosnap=on, no_atime_update=on, maxdirsize=18350,
                                64-bit            guarantee=none
                Clone, backed by volume 'vmdk_vol', snapshot 'clone_qa_vmdk_vol.1'
                         Volume UUID: 2dd22882-06bb-11e2-9ef8-123478563412
                Containing aggregate: 'aggr0'

prod_filer_h2> vol clone split start qa_vmdk_vol
Clone volume 'qa_vmdk_vol' will be split from its parent.
Monitor system log or use 'vol clone split status' for progress.

The clone-splitting operation begins. All existing Snapshot copies of the clone are deleted, and the creation of Snapshot copies of the clone is prevented for the duration of the split operation.

Note: If an online data migration operation is in progress, this command might fail. In this case, wait and retry the command when the online data migration operation is complete.

prod_filer_h2> Fri May 17 01:23:06 EDT [prod_filer_h2:wafl.volume.clone.split.started:info]: Clone split was started for volume qa_vmdk_vol
Fri May 17 01:23:06 EDT [prod_filer_h2:wafl.scan.start:info]: Starting volume clone split on volume qa_vmdk_vol.

prod_filer_h2> vol clone split status qa_vmdk_vol
Volume 'qa_vmdk_vol', 2489 of 9863175 inodes processed (0%)
        12104546 blocks scanned. 6739818 blocks updated.

prod_filer_h2> vol clone split status qa_vmdk_vol
Volume 'qa_vmdk_vol', 3971 of 9863175 inodes processed (0%)
        19258602 blocks scanned. 13389935 blocks updated.

prod_filer_h2> vol clone split status qa_vmdk_vol
Volume 'qa_vmdk_vol', 4364 of 9863175 inodes processed (0%)
        23202794 blocks scanned. 15088433 blocks updated.

prod_filer_h2> snap list vmdk_vol
Volume vmdk_vol
working...

  %/used       %/total  date          name
----------  ----------  ------------  --------
 31% (29%)   30% (27%)  Sep 24 22:46  clone_qa_vmdk_vol.1 (busy,vclone)

prod_filer_h2> vol clone split status qa_vmdk_vol
Volume 'qa_vmdk_vol', 11615 of 9863175 inodes processed (0%)
        32004621 blocks scanned. 23269747 blocks updated.

prod_filer_h2> vol clone split status qa_vmdk_vol
vol clone split status: The volume is not a clone

prod_filer_h2> snap list vmdk_vol
Volume vmdk_vol
working...

  %/used       %/total  date          name
----------  ----------  ------------  --------
 30% (30%)   28% (28%)  Sep 24 22:46  clone_qa_vmdk_vol.1

prod_filer_h2> snap delete vmdk_vol clone_qa_vmdk_vol.1

prod_filer_h2> Fri May 17 03:51:30 EDT [prod_filer_h2:wafl.snap.delete:info]: Snapshot copy clone_qa_vmdk_vol.1 on volume vmdk_vol NetApp was deleted by the Data ONTAP function snapcmd_delete. The unique ID for this Snapshot copy is (8, 1283708).
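The console rounds progress to whole percent, which reads as "(0%)" for most of a large split. A finer-grained figure can be pulled out of a captured status line with awk. This is just a sketch working on the status text shown above; on a real system you would capture the line over ssh/rsh to the filer (e.g. `ssh prod_filer_h2 vol clone split status qa_vmdk_vol`).

```shell
#!/bin/sh
# Compute a finer-grained progress percentage from a captured
# 'vol clone split status' line (sample text copied from the session above).
status="Volume 'qa_vmdk_vol', 2489 of 9863175 inodes processed (0%)"

# Field 3 is inodes done, field 5 is total inodes.
echo "$status" | awk '{ printf "%.4f%% (%s of %s inodes)\n", $3 * 100 / $5, $3, $5 }'
```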

Tuesday, January 24, 2012

Extracting client IP addresses from the listener.log file.

The listener log is a plain text file, so searching it for specific information is easy; in its raw form, however, it is difficult to extract collated information.

The simplest and best way to do that is with the humble, widely used Linux commands: grep, awk, uniq, sort, wc, etc.

If you are not sure where it lives, you can find the location of the listener log file using the listener control utility:

[oracle@testoradb diag]$ lsnrctl status

LSNRCTL for Linux: Version 11.2.0.2.0 - Production on 19-JAN-2012 07:55:07

Copyright (c) 1991, 2010, Oracle. All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1521)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 11.2.0.2.0 - Production
Start Date 22-AUG-2011 01:03:21
Uptime 150 days 7 hr. 51 min. 46 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/apps/oracle/testoradb/product/11.2.0/dbhome/network/admin/listener.ora
Listener Log File /u01/apps/oracle/testoradb/diag/tnslsnr/listener/alert/listener.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=testoradb)(PORT=1521)))
Services Summary...
Service "ORA_TEST" has 1 instance(s).
Instance "ORA_TEST", status READY, has 1 handler(s) for this service...
Service "testoraXDB" has 1 instance(s).
Instance "ORA_TEST", status READY, has 1 handler(s) for this service...
The command completed successfully
[oracle@testoradb diag]$

Note the line that shows “Listener Log File,” which shows the directory of the listener log file.

[oracle@testoradb diag]$ cat listener.log | less
Mon Jul 18 02:56:26 2011
18-JUL-2011 02:56:26 * (CONNECT_DATA=(SERVICE_NAME=ORA_TEST)(CID=(PROGRAM=xHSSrv)(HOST=TEST_APP)(USER=apple))) * (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.100.10)(PORT=44726)) *

establish * ORA_TEST * 0

The field Protocol Information has the following subfields:
PROTOCOL — the protocol that the client has used to connect, such as TCP.
HOST — the IP address of the client machine.
PORT — the client-side (ephemeral) port number of the connection. (Note: it's not the port the listener is
listening on, so this is not especially interesting to us.)

We can then narrow down to the client IP addresses, with the number of times each connected to the database on 18-Jan-2012, like so:

[oracle@testoradb diag]$ cat listener.log | grep 18-JAN | grep CONNECT | awk -F* '{print $3}' | grep -o "192.*)" | grep -v 192.168.100.99 | awk -FPORT '{print $1}' | sort | uniq -c
5 192.168.100.32)(
255 192.168.100.33)(
19 192.168.100.34)(
60 192.168.100.56)(
11 192.168.100.58)(
1 192.168.100.62)(
6 192.168.100.71)(
9 192.168.100.163)(
1 192.168.100.164)(
12 192.168.100.165)(
5 192.168.100.166)(
2 192.168.100.167)(
2 192.168.100.169)(
[oracle@testoradb diag]$

It's rough, but the information is there: each client IP and its connection count, with the monitoring system's IP (192.168.100.99) excluded via grep -v.
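For reference, here is a slightly tidier variant of the same idea: let sed capture just the client IP from the ADDRESS clause instead of juggling grep -o and awk -F. The log lines below are made-up samples in the listener.log format shown earlier; on a real system you would point the pipeline at the actual listener.log.

```shell
#!/bin/sh
# Build a small sample in listener.log format (stand-in for the real file).
cat <<'EOF' > /tmp/listener_sample.log
18-JAN-2012 02:56:26 * (CONNECT_DATA=(SERVICE_NAME=ORA_TEST)(CID=(PROGRAM=xHSSrv)(HOST=TEST_APP)(USER=apple))) * (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.100.33)(PORT=44726)) * establish * ORA_TEST * 0
18-JAN-2012 02:57:01 * (CONNECT_DATA=(SERVICE_NAME=ORA_TEST)(CID=(PROGRAM=xHSSrv)(HOST=TEST_APP)(USER=apple))) * (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.100.33)(PORT=44801)) * establish * ORA_TEST * 0
18-JAN-2012 02:58:45 * (CONNECT_DATA=(SERVICE_NAME=ORA_TEST)(CID=(PROGRAM=xHSSrv)(HOST=TEST_APP)(USER=apple))) * (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.100.56)(PORT=51003)) * establish * ORA_TEST * 0
EOF

# Keep 18-JAN entries, capture the numeric HOST= value, drop the
# monitoring IP, then count connections per client.
grep '18-JAN' /tmp/listener_sample.log \
  | sed -n 's/.*(HOST=\([0-9.]*\))(PORT=.*/\1/p' \
  | grep -v '^192\.168\.100\.99$' \
  | sort | uniq -c | sort -rn
```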

Cheers !
Harish.

Thursday, January 12, 2012

Creating ramdisk in Linux

A RAMDisk is a portion of RAM which is being used as if it were a disk drive. RAMDisks have fixed sizes, and act like regular disk partitions. Access time is much faster for a RAMDisk than for a real, physical disk. However, any data stored on a RAMDisk is lost when the system is shut down or powered off.

RAMDisks can be a great place to store temporary data. Good candidates include:

1) Mounting loopback file systems (such as run-from-floppy/CD distributions),
2) Working on unencrypted data from encrypted documents,
3) Holding the initrd (initial RAM disk) in embedded Linux systems, where it serves as the final root file system,
4) Caching content that does not change, e.g. web images or downloadable files.

Linux kernels 2.4 and later have built-in support for ramdisks. You can check whether the ramdisk driver is set up with:

[root@test-db]# dmesg | grep RAMDISK
RAMDISK driver initialized: 16 RAM disks of 16384K size 1024 blocksize

You should get the above output on CentOS and RHEL; other Linux flavors will produce similar output.

1) Changing the Kernel Parameters:

Ramdisk size is controlled by a command-line option passed to the kernel during boot. Since GRUB is the default bootloader on CentOS 6.2, I will modify /etc/grub.conf with the new kernel option. The option for ramdisk size is ramdisk_size=xxxxxx, where xxxxxx is the size expressed in 1024-byte blocks.
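The conversion to 1024-byte blocks is simple arithmetic, and a quick shell sanity check (nothing system-specific here) confirms the figure used below:

```shell
# ramdisk_size is in 1024-byte (1 KiB) blocks:
# GB x 1024 (-> MB) x 1024 (-> KB). For a 4 GB ramdisk:
GB=4
echo "ramdisk_size=$((GB * 1024 * 1024))"
```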

Here is what I will add to /etc/grub.conf to configure 4 GB ramdisks (ramdisk_size=4194304):

[root@test-db]# vi /etc/grub.conf

Find the line which looks similar to following:

kernel /vmlinuz-2.6.32-220.el6.x86_64 ro root=/dev/mapper/vg_lvm-lv_root

Add ramdisk_size=4194304 to the end of that line. Your grub.conf should now look like:

--------------------------------------------------------------------------------------
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE: You have a /boot partition. This means that
# all kernel and initrd paths are relative to /boot/, eg.
# root (hd0,0)
# kernel /vmlinuz-version ro root=/dev/mapper/vg_lvm-lv_root
# initrd /initrd-[generic-]version.img
#boot=/dev/sda
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title CentOS (2.6.32-220.el6.x86_64)
root (hd0,0)
kernel /vmlinuz-2.6.32-220.el6.x86_64 ro root=/dev/mapper/vg_lvm-lv_root rd_LVM_LV=vg_lvm/lv_swap rd_NO_LUKS LANG=en_US.UTF-8 rd_LVM_LV=vg_lvm/lv_root rd_NO_MD quiet SYSFONT=latarcyrheb-sun16 rhgb crashkernel=auto KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM ramdisk_size=4194304
initrd /initramfs-2.6.32-220.el6.x86_64.img
--------------------------------------------------------------------------------------

Save and exit grub.conf. The new ramdisk size is now configured, but it does not take effect until you reboot the system.

Once the system has rebooted, we can do the rest of the configuration.

2) Format the ramdisk :
There is no need to format the ramdisk with a journaling file system, so we will simply use the ubiquitous ext2. I only want one ramdisk, so I will format only /dev/ram0:

[root@test-db]# mke2fs -m 0 /dev/ram0

The -m 0 option keeps mke2fs from reserving any space on the file system for the root user (by default it reserves 5%). I want all of the ramdisk space available to a regular user for working with encrypted files.

3) Create a mount point and mount the ramdisk :
Now that the ramdisk is formatted, you must create a mount point for it. Then you can mount the ramdisk and use it. We will use the directory /ramdisk for this operation.

[root@test-db]# mkdir /ramdisk

[root@test-db]# mount /dev/ram0 /ramdisk

Now verify the new ramdisk mount:
[root@test-db]# df -h | grep ram0

4) Performance benchmarking :

Now that it has been created, you can copy, move, delete, edit, and list files on the ramdisk exactly as if they were on a physical disk partition. We can run an I/O benchmark using the dd command.

In the example below, "/hdisk" is a folder on the server's hard drive and "/ramdisk" is the mounted ramdisk.

[root@test-db]# dd if=/dev/zero of=/hdisk/X1 bs=128k count=10240
10240+0 records in
10240+0 records out
1342177280 bytes (1.3 GB) copied, 50.6943 s, 26.5 MB/s
[root@test-db]# dd if=/dev/zero of=/ramdisk/X1 bs=128k count=10240
10240+0 records in
10240+0 records out
1342177280 bytes (1.3 GB) copied, 4.34945 s, 309 MB/s

The difference is clearly visible.

RAMDisk is also a great place to view decrypted GPG or OpenSSL files, as well as a good place to create files that will be encrypted. After your host is powered down, all traces of files created on the ramdisk are gone.

To unmount the ramdisk, simply enter the following:

[root@test-db]# umount -v /ramdisk

If you remount the ramdisk, your data will still be there. Once memory has been allocated to the ramdisk, it is flagged so that the kernel will not try to reuse it later; you therefore cannot "reclaim" the RAM after you are done using the ramdisk. For this reason, be careful not to allocate more memory to the ramdisk than you absolutely need. In my case, I am allocating less than 10% of the physical RAM; you will have to tailor the ramdisk size to your needs. Of course, you can always free up the space with a reboot!

Automating ramdisk creation :

If you need to create and mount a ramdisk every time your system boots, you can automate the process by adding some commands to your /etc/rc.local init script.

Here are the lines that can be added:

# Formats, mounts, and sets permissions for the 4 GB ramdisk
/sbin/mke2fs -q -m 0 /dev/ram0
/bin/mount /dev/ram0 /ramdisk
/bin/chown apache:apache /ramdisk
/bin/chmod 0750 /ramdisk

Cheers !
Harish.

Tuesday, January 10, 2012

Memory Hotplug for Linux Guests

Recently I was asked to increase the RAM in a couple of the development VMs, but the request came with a twist: we could not afford a reboot. It would have wasted a lot of the dev team's time to stop all the engines, start them again after the reboot, and wait for the VM to catch up and download all the relevant data from the database.

VMware forums were lacking in detail about hot-add compatibility with guest operating systems, so I realised I'd better look for a solution on Google and try it for myself to see how it works.

The hot-add hardware feature is only supported on VM hardware version 7. Once this was verified, I made sure Edit Settings > Options > General Options was set to the correct OS type. This is important, as the interface will only display the Memory/CPU Hotplug options for supported OSes. In my case I was running CentOS 6.2 x86_64, so I selected Red Hat Enterprise Linux (64-bit).

Next, enable the Memory/CPU Hotplug feature under Edit Settings > Options.

I found that the CentOS build I was using (2.6.32-220.el6.x86_64) recognises hot added memory automatically.

The VM was running with 4GB RAM, so I added another 4GB RAM and now it had 8GB RAM allocated to it.

When memory is hot-plugged, the kernel recognizes the new memory, builds new memory-management tables, and creates sysfs files for operating on it. If the firmware supports notifying the OS of newly attached memory (ACPI can signal this event), this phase is triggered automatically; if not, the administrator triggers it manually through the "probe" interface.

Within the "/sys/devices/system/memory" directory there are a number of folders named 'memoryX', where X identifies a unique 'section' of memory. How big each section is, and hence how many folders you have, is dependent on your environment.

[root@vm24_dev ~]# ls -lrth /sys/devices/system/memory
total 0
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory9
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory8
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory7
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory6
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory5
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory4
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory3
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory2
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory11
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory10
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory1
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory0
--w-------. 1 root root 4.0K Jan 10 14:14 probe
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory71
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory70
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory69
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory68
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory67
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory66
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory65
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory64
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory63
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory62
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory61
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory60
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory59
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory58
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory57
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory56
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory55
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory54
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory53
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory52
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory51
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory50
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory49
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory48
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory47
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory46
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory45
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory44
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory43
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory42
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory41
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory40
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory39
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory38
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory37
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory36
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory35
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory34
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory33
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory32
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory23
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory22
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory21
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory20
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory19
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory18
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory17
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory16
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory15
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory14
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory13
drwxr-xr-x. 2 root root 0 Jan 10 14:14 memory12
-rw-r--r--. 1 root root 4.0K Jan 10 14:14 soft_offline_page
-rw-r--r--. 1 root root 4.0K Jan 10 14:14 hard_offline_page
-r--r--r--. 1 root root 4.0K Jan 10 14:14 block_size_bytes

You can check the file "/sys/devices/system/memory/block_size_bytes" to see the size of each section in bytes. Basically, the whole of memory has been divided into equal-sized chunks, as per the SPARSEMEM memory model.

[root@vm24_dev ~]# cat /sys/devices/system/memory/block_size_bytes
8000000
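Note that the value is printed in hexadecimal, so 8000000 here means 0x8000000 bytes. A quick bash conversion (a sketch, using the value from this box):

```shell
#!/bin/bash
# block_size_bytes is hex: convert 0x8000000 to bytes, then to MB,
# to see how much memory each memoryX directory represents.
hex=8000000
bytes=$((16#$hex))
echo "$bytes bytes = $((bytes / 1024 / 1024)) MB per section"
```

128 MB per section is consistent with the 64 memoryX entries listed above backing 8 GB of RAM.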

In each section's folder there is a file called 'state' containing one of two words: online or offline.
Locate the memoryX folder(s) that account for the hot-added memory by working out the section sizes above, or (like me) just check the contents of the state files:

[root@vm24_dev ~]# cat /sys/devices/system/memory/memory39/state
online

Once you locate the offline sections, you can bring them online as follows:

[root@vm24_dev ~]# echo online > /sys/devices/system/memory/memory40/state

Validate the memory change using:

[root@vm24_dev ~]# free
             total       used       free     shared    buffers     cached
Mem:       8060484     262040    7798444          0       8080      60648
-/+ buffers/cache:      193312    7867172
Swap:     11300856          0   11300856

I noticed that William Lam (lamw on the VMware communities) created a nice script to automate the discovery and online process. It’s very neat and can be downloaded from : http://communities.vmware.com/docs/DOC-10492

You can also create it as follows:

[root@vm24_dev ~]# vi online_hotplug_memory.sh

Paste the following content in to the file and save it.
-------------------------------------------------------------------------
#!/bin/bash
# William Lam
# http://engineering.ucsb.edu/~duonglt/vmware/
# hot-add memory to LINUX system using vSphere ESX(i) 4.0
# 08/09/2009

if [ "$UID" -ne "0" ]
then
    echo -e "You must be root to run this script.\nYou can 'sudo' to get root access"
    exit 1
fi

for MEMORY in $(ls /sys/devices/system/memory/ | grep memory)
do
    SPARSEMEM_DIR="/sys/devices/system/memory/${MEMORY}"
    echo "Found sparsemem: \"${SPARSEMEM_DIR}\" ..."
    SPARSEMEM_STATE_FILE="${SPARSEMEM_DIR}/state"
    STATE=$(cat "${SPARSEMEM_STATE_FILE}" | grep -i online)
    if [ "${STATE}" == "online" ]; then
        echo -e "\t${MEMORY} already online"
    else
        echo -e "\t${MEMORY} is new memory, onlining memory ..."
        echo online > "${SPARSEMEM_STATE_FILE}"
    fi
done
-------------------------------------------------------------------------
[root@vm24_dev ~]# chmod +x online_hotplug_memory.sh
[root@vm24_dev ~]# ./online_hotplug_memory.sh

The output should be as follows:

[root@vm24_dev ~]# ./online_hotplug_memory.sh
Found sparsemem: "/sys/devices/system/memory/memory0" ...
memory0 already online
Found sparsemem: "/sys/devices/system/memory/memory1" ...
memory1 already online
Found sparsemem: "/sys/devices/system/memory/memory2" ...
memory2 already online
Found sparsemem: "/sys/devices/system/memory/memory3" ...
memory3 already online
Found sparsemem: "/sys/devices/system/memory/memory40" ...
memory40 is new memory, onlining memory ...
Found sparsemem: "/sys/devices/system/memory/memory41" ...
memory41 is new memory, onlining memory ...
Found sparsemem: "/sys/devices/system/memory/memory42" ...
memory42 is new memory, onlining memory ...

That’s it! Quite simple really.

Cheers !
Harish.

Monday, January 09, 2012

Securing Oracle Database server using IPTables in Linux

Linux can help administrators create a strong firewall with the powerful, kernel-based netfilter/iptables software. As demonstrated below, iptables can create general or specific packet filters to allow or deny traffic. This enables administrators to protect their servers from a wide variety of hazards, including denial-of-service attacks and hack attempts. As always, the best way to learn is to get your hands dirty and experiment with iptables on a test machine.

This article is an example of how you can secure an Oracle Database server using iptables on Linux.

Edit the iptables file from the /etc/sysconfig directory:

[root]# vi /etc/sysconfig/iptables

#Nagios Server for real time alerts : 192.168.0.99
#Zabbix Server for historic perf. data : 192.168.0.98
#Trusted VLAN for SSH and SFTP traffic : 192.168.4.0
#Trusted IP's from untrusted VLAN : 192.168.16.xx

# Rule to enable PING from selected IP's (ping is ICMP, not TCP)
-A INPUT -p icmp -s 192.168.0.99 -j ACCEPT
-A INPUT -p icmp -s 192.168.0.98 -j ACCEPT

# Rule to enable monitoring from selected IP's
-A INPUT -m state --state NEW -m tcp -s 192.168.0.99 -p tcp --dport 5666 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -s 192.168.0.98 -p tcp --dport 10050 -j ACCEPT

# Rule to enable SSH (22) and FTP (20/21) from Trusted VLAN; SFTP rides on SSH
-A INPUT -m state --state NEW -m tcp -s 192.168.4.0/24 -p tcp --dport 20 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -s 192.168.4.0/24 -p tcp --dport 21 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -s 192.168.4.0/24 -p tcp --dport 22 -j ACCEPT

# Rule to enable Oracle port for IP’s of Application VM’s
-A INPUT -m state --state NEW -m tcp -s 192.168.16.20 -p tcp --dport 1521 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -s 192.168.16.21 -p tcp --dport 1521 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -s 192.168.16.22 -p tcp --dport 1521 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -s 192.168.16.23 -p tcp --dport 1521 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -s 192.168.16.24 -p tcp --dport 1521 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -s 192.168.16.25 -p tcp --dport 1521 -j ACCEPT

# Catch All Rule
-A INPUT -m state --state NEW -m tcp -p tcp -j DROP

Restart the iptables service

[root]# service iptables restart
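To sanity-check which ports the rule set actually opens (and to whom), a small awk pass over the rules file helps. The here-doc below is a sample copied from the rules above; on a real server you would read /etc/sysconfig/iptables directly.

```shell
#!/bin/bash
# List destination port and source address for every --dport ACCEPT rule.
cat <<'EOF' > /tmp/iptables_sample
-A INPUT -m state --state NEW -m tcp -s 192.168.0.99 -p tcp --dport 5666 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -s 192.168.4.0/24 -p tcp --dport 22 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -s 192.168.16.20 -p tcp --dport 1521 -j ACCEPT
EOF

awk '/--dport/ && /ACCEPT/ {
    for (i = 1; i <= NF; i++) {
        if ($i == "-s")      src  = $(i + 1)
        if ($i == "--dport") port = $(i + 1)
    }
    print "port " port " open to " src
}' /tmp/iptables_sample
```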

And you are good to go !

Cheers !
Harish.

Extending Disk space on Your Linux Computer

*************************************************************************************
Extending Disk space on Your Linux Computer using LVM2.
*************************************************************************************

Prerequisite: This tutorial covers adding disk space to your Linux computer. It is assumed that the new hard drive has already been physically added to the system.

As root perform the following:

[root]# fdisk /dev/sdb
Command (m for help): m (Enter the letter "m" to get list of commands)
Command action
a toggle a bootable flag
b edit bsd disklabel
c toggle the dos compatibility flag
d delete a partition
l list known partition types
m print this menu
n add a new partition
o create a new empty DOS partition table
p print the partition table
q quit without saving changes
s create a new empty Sun disklabel
t change a partition's system id
u change display/entry units
v verify the partition table
w write table to disk and exit
x extra functionality (experts only)

Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-2654, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-2654, default 2654):
Using default value 2654

Command (m for help): p

Disk /dev/sdb: 240 heads, 63 sectors, 2654 cylinders
Units = cylinders of 15120 * 512 bytes

Device Boot Start End Blocks Id System
/dev/sdb1 1 2654 20064208+ 83 Linux

Command (m for help): w (Write and save partition table)

Format the new volume using the mkfs command:

[root]# mkfs -t ext3 /dev/sdb1
mke2fs 1.27 (8-Mar-2002)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
2508352 inodes, 5016052 blocks
250802 blocks (5.00%) reserved for the super user
First data block=0
154 block groups
32768 blocks per group, 32768 fragments per group
16288 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 34 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.


The pvdisplay allows you to see the attributes of one or more physical volumes like size, physical extent size, space used for the volume group descriptor area and so on.

[root]# pvdisplay
--- Physical volume ---
PV Name /dev/sda2
VG Name vg_lvm
PV Size 19.51 GiB / not usable 3.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 4994
Free PE 0
Allocated PE 4994
PV UUID uZOPl7-Jg0a-QAbV-K3NU-402p-Ia7i-eUTe81


Adding physical volumes to a volume group
First initialize the new partition as a physical volume with 'pvcreate /dev/sdb1', then use 'vgextend' to add the initialized physical volume to the existing volume group (vg_lvm).

[root]# vgextend vg_lvm /dev/sdb1

[root]# pvdisplay
--- Physical volume ---
PV Name /dev/sda2
VG Name vg_lvm
PV Size 19.51 GiB / not usable 3.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 4994
Free PE 0
Allocated PE 4994
PV UUID uZOPl7-Jg0a-QAbV-K3NU-402p-Ia7i-eUTe81

--- Physical volume ---
PV Name /dev/sdb1
VG Name vg_lvm
PV Size 19.99 GiB / not usable 1.43 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 5118
Free PE 126
Allocated PE 4992
PV UUID up8sB5-3lSg-Elwc-xoLj-xn3N-cqc2-KoQ6cY
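A handy sanity check: free space on a PV is Free PE multiplied by PE Size. The snippet below parses those two figures out of a captured pvdisplay block; the here-doc mirrors the /dev/sdb1 output above rather than querying a live system.

```shell
#!/bin/bash
# Free PE x PE Size = usable free space on the physical volume.
pvout=$(cat <<'EOF'
PV Name /dev/sdb1
PE Size 4.00 MiB
Total PE 5118
Free PE 126
Allocated PE 4992
EOF
)
free_pe=$(echo "$pvout" | awk '/Free PE/ { print $3 }')
pe_mb=$(echo "$pvout"  | awk '/PE Size/ { print int($3) }')
echo "Free space on PV: $((free_pe * pe_mb)) MiB"
```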

Extending a logical volume

To extend a logical volume you simply tell the lvextend command how much you want to increase the size. You can specify how much to grow the volume, or how large you want it to grow to:

[root]# lvextend -L20G /dev/mapper/vg_lvm-lv_root
lvextend -- extending logical volume "/dev/mapper/vg_lvm-lv_root" to 20 GB
lvextend -- doing automatic backup of volume group "vg_lvm"
lvextend -- logical volume "/dev/mapper/vg_lvm-lv_root" successfully extended

This will extend /dev/mapper/vg_lvm-lv_root to 20 Gigabytes.

[root]# lvextend -L+13G /dev/mapper/vg_lvm-lv_root
lvextend -- extending logical volume "/dev/mapper/vg_lvm-lv_root" to 33 GB
lvextend -- doing automatic backup of volume group "vg_lvm"
lvextend -- logical volume "/dev/mapper/vg_lvm-lv_root" successfully extended

will add another 13 GB to /dev/mapper/vg_lvm-lv_root.

After you have extended the logical volume you must grow the file system to match. How you do this depends on the file system you are using.

By default, most file-system resizing tools will grow the file system to the size of the underlying logical volume, so you don't need to worry about specifying the same size for both commands.

Unless you have patched your kernel with the ext2online patch it is necessary to unmount the file system before resizing it. (It seems that the online resizing patch is rather dangerous, so use at your own risk)

[root]# resize2fs /dev/mapper/vg_lvm-lv_root

[root]# lvdisplay
--- Logical volume ---
LV Name /dev/vg_lvm/lv_root
VG Name vg_lvm
LV UUID ZzRWTt-M8QA-Awke-2t6c-Mf7i-0CUe-Xe2O4H
LV Write Access read/write
LV Status available
# open 1
LV Size 33.13 GiB
Current LE 8482
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0

*************************************************************************************
Extending Swap on an LVM2 Logical Volume
*************************************************************************************

To extend an LVM2 swap logical volume (assuming /dev/vg_lvm/lv_swap is the volume you want to extend) from 6GB to 11 GB:

Disable swapping for the associated logical volume:
[root]# swapoff -v /dev/vg_lvm/lv_swap

Resize the LVM2 logical volume by 5 GB:
[root]# lvm lvresize /dev/vg_lvm/lv_swap -L +5G

Format the new swap space:
[root]# mkswap /dev/vg_lvm/lv_swap

Enable the extended logical volume:
[root]# swapon -va

Test that the logical volume has been extended properly:
[root]# free
             total       used       free     shared    buffers     cached
Mem:       3924924     229652    3695272          0       9372      67056
-/+ buffers/cache:      153224    3771700
Swap:     11300856          0   11300856

[root]# lvdisplay /dev/vg_lvm/lv_swap
--- Logical volume ---
LV Name /dev/vg_lvm/lv_swap
VG Name vg_lvm
LV UUID ewhNre-qYYE-iY0W-HtQE-m5jk-x7s4-f8UPUb
LV Write Access read/write
LV Status available
# open 1
LV Size 10.78 GiB
Current LE 2759
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1

Thursday, December 29, 2011

Error 500 when setting up vsftpd.

Trying to set up vsftpd on CentOS 6.2 ?

When you try to FTP from a client machine to the vsftpd server, you will be prompted for a user id and password, but then you will get the following error:

"500 OOPS: cannot change directory:/home/testuser"
Login failed.

"testuser" is my user id on the CentOS Server.

I got it to work by disabling SELinux, but I wanted a less drastic solution.

There are a lot of recommendations floating around the net for this; try the following first:

[root@testvm vsftpd]# getenforce
Enforcing
[root@testvm vsftpd]# getsebool -a | grep ftp
allow_ftpd_anon_write –> off
allow_ftpd_full_access –> off
allow_ftpd_use_cifs –> off
allow_ftpd_use_nfs –> off
allow_tftp_anon_write –> off
ftp_home_dir –> off (in our case this option is off; this is the one to turn on)
ftpd_disable_trans –> off
ftpd_is_daemon –> on
httpd_enable_ftp_server –> off
tftpd_disable_trans –> off
[root@testvm vsftpd]# setsebool -P ftp_home_dir on

This is all that you need to do.

Monday, February 28, 2011

Step-by-step instructions for setting up Netapp (Data OnTap) Simulator

Those who are new to NetApp can use the NetApp Data ONTAP Simulator to get comfortable with NetApp commands. This tool gives you the experience of administering and using a NetApp storage system with all the features of Data ONTAP. The Simulator can be downloaded from http://now.netapp.com/NOW/cgi-bin/simulator (you need NOW access). It ships with fully functional license keys for all NetApp functionality.

The simulator can be loaded onto a Red Hat or SuSE Linux box and looks and feels exactly like Data ONTAP. Almost anything you can do with Data ONTAP can be done with the simulator. Without purchasing new hardware or impacting your production environment, you can test functionality, export NFS and CIFS shares, and so on.

System Requirement:

Data ONTAP 7G (7.x.x) simulators

A server or PC with a single network card, 128 MB of RAM minimum (512 MB recommended), and at least 250 MB of free hard disk space; around 5 GB is better for simple testing, and if you want more simulated disks plan for ~30 GB
A 32-bit Linux OS installed, running, and networked (works on Red Hat Linux 7.1 through 9.0, and SUSE 8.1 and 8.2)
The installer must be logged on as root

Limitations:

This is not a production version of Data ONTAP and should not be used in your production environment. There are inefficiencies (for example, a 1 GB disk file will be much larger than 1 GB), and performance running on another OS without a disk system behind it will obviously be considerably lower than with Data ONTAP. The simulator cannot hold more than 28 disks, or approximately 28 GB in total size. Finally, the simulator can't emulate environments where specific hardware is required (for example, Fibre Channel). It is recommended that the Data ONTAP Simulator be installed on a non-production Linux system: the installation scripts may replace the Red Hat libc library with an older, more stable one, and it's unlikely but possible that other applications may be affected.

Steps to install Simulator:

Step I:
====
o Download the Data ONTAP simulator and place it under /home (or another directory with enough free space)

linux-sesl-184-54:/home # ls
7.3.1-tarfile-v22.tar

Step II:
=====
o Now untar the simulator installer.

linux-sesl-184-54:/home # tar -xvf 7.3.1-tarfile-v22.tar

Step III :
=====
o Once you have untarred the archive, you will find a new folder named simulator into which the installer was extracted.

linux-sesl-184-54:/home # ls
7.3.1-tarfile-v22.tar simulator ===========================> Extracted under a folder called simulator

Step IV:
=====
o Change directory to the extracted path

linux-sesl-184-54:/home # cd simulator/
linux-sesl-184-54:/home/simulator # ls
Vmware, Linux and Simulator installation.doc disks.tgz disks2.tgz doc license.htm readme.htm runsim.sh setup.sh sim.tgz

Step V:
=====
o Now run the installer script (setup.sh) to create a single-node simulator. If you wish to install a clustered pair instead, skip this step and go to Step VII.

linux-sesl-184-54:/home/simulator # ./setup.sh
Script version 22 (18/Sep/2007)
Where to install to? [/sim]: =====================> Choose your simulator install path.
Would you like to install as a cluster? [no]:
Would you like full HTML/PDF FilerView documentation to be installed [yes]:

Continue with installation? [no]: yes ===================================================> Enter "yes" to continue the installation

Creating /sim
Unpacking sim.tgz to /sim
Configured the simulators mac address to be [00:50:56:1:cd:eb]
Please ensure the simulator is not running.
Your simulator has 3 disk(s). How many more would you like to add? [0]: 26
Too high. Must be between 0 and 25.
Your simulator has 3 disk(s). How many more would you like to add? [0]: 25 =====================> Maximum number of additional disks for this simulator (choose the number and size of disks based on your Linux disk space)

The following disk types are available in MB:
Real (Usable)
a - 43 ( 14)
b - 62 ( 30)
c - 78 ( 45)
d - 129 ( 90)
e - 535 (450)
f - 1024 (900)
If you are unsure choose the default option a

What disk size would you like to use? [a]: f ===========================================> Choose a larger disk size based on your needs and the available disk space
Disk adapter to put disks on? [0]:
Use DHCP on first boot? [yes]: no ===================================================> Say "no" if you want to configure a static IP address
Ask for floppy boot? [no]:
Your default simulator network interface is already configured to eth0.
Which network interface should the simulator use? [eth0]: ==============================> Choose the interface you want to use for data traffic

Another simulator is running. Cannot give good advise about memory.
How much memory would you like the simulator to use? [512]: =============================> Choose the Default RAM size
Create a new log for each session? [no]:
Overwrite the single log each time? [yes]:
Adding 25 additional disk(s).
Complete. Run /sim/runsim.sh to start the simulator.
linux-sesl-184-54:/home/simulator #

Step VI:
======
o That's it. Start the simulator by running the startup script /sim/runsim.sh, then complete the first-boot setup prompts as per your needs.

Step VII:
=====

Network Appliance Clustered Failover delivers a robust and highly available data service for business-critical environments. Installed on a pair of NetApp filers, NetApp Clustered Failover ensures data availability by transferring the data service of an unavailable filer to the other filer in the cluster. The Data ONTAP Simulator also supports clustered failover.

o To configure the Data ONTAP Simulator as an active/active (cluster) pair, do the following:

CFO Step I:

Run setup.sh and, when it asks the following question, answer yes and continue the setup:
Would you like to install as a cluster? [no]: yes ====================================> Say yes to install the active/active pair (cluster) nodes

CFO Step II: You will now find node1 & node2 simulators installed in the given path.

CFO Step III: Run the setup script for each node and configure the interface that should take over the partner's IP address during failover.
Please enter the new hostname []: cfo1
Do you want to configure virtual network interfaces? [n]:
Please enter the IP address for Network Interface ns0 []: 1.1.1.1 ==================> Primary IP address of node1
Please enter the netmask for Network Interface ns0 [255.0.0.0]:
Should interface ns0 take over a partner IP address during failover? [n]: y ============> Say "Y" to enable Cluster Failover
The clustered failover software is not yet licensed. To enable network failover, you should run the 'license' command for clustered failover.
Please enter the IP address or interface name to be taken over by ns0 []: 1.1.1.2 =======> Partner IP address of node2

CFO Step IV: Add the cluster license. After the reboot (mandatory since cluster is newly licensed), enable clustering from the CLI.

CFO Step V: Check the status via the cf status command; it should report that cluster failover is enabled.
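The license and enablement steps above look roughly like this on each node; the license code shown is a placeholder (the simulator download page lists the actual keys), and the output is a sketch of what 7-mode prints:

```
cfo1> license add XXXXXXX          <=== cluster failover license key
cfo1> reboot
... (node boots back up) ...
cfo1> cf enable
cfo1> cf status
Cluster enabled, cfo2 is up.
```

Run the same license add and cf enable on the partner node (cfo2) as well.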


Bringing the Virtual Filer Up
# cd /sim

#/sim/runsim.sh
runsim.sh script version Script version 22 (18/Sep/2007)
This session is logged in /netapp/7.3/sessionlogs/log
NetApp Release 7.3: Thu Jul 24 12:55:28 PDT 2008
Copyright (c) 1992-2008 Network Appliance, Inc.
Starting boot on Tue Dec 9 11:45:37 GMT 2008
Tue Dec 9 11:45:42 GMT [fmmb.current.lock.disk:info]: Disk v4.16 is a local HA mailbox disk.
Tue Dec 9 11:45:42 GMT [fmmb.instStat.change:info]: normal mailbox instance on local side.
Tue Dec 9 11:45:43 GMT [raid.cksum.replay.summary:info]: Replayed 0 checksum blocks.
Tue Dec 9 11:45:43 GMT [raid.stripe.replay.summary:info]: Replayed 0 stripes.
…. Boot message
Please enter the new hostname []: - Specify Filer hostname
Do you want to configure virtual network interfaces? [n]:n
Please enter the IP address for Network Interface ns0 []: -- Provide Filer ip
Please enter the netmask for Network Interface ns0 [255.255.0.0]: -- Provide Netmask
Please enter media type for ns0 {100tx-fd, auto} [auto]:
Please enter the IP address for Network Interface ns1 []:
Would you like to continue setup through the web interface? [n]:n
Please enter the name or IP address of the default gateway: -- Provide default gateway
The administration host is given root access to the filer's
/etc files for system administration. To allow /etc root access
to all NFS clients enter RETURN below.
Please enter the name or IP address of the administration host: -- Provide admin hostname
Please enter the IP address for adminserver : -- Provide admin ip
Please enter timezone [GMT]:Asia/Calcutta
Where is the filer located? []:Mumbai
What language will be used for multi-protocol files (Type ? for list):en_US
Setting language on volume vol0
The new language mappings will be available after reboot
Tue Dec 9 11:47:03 GMT [vol.language.changed:info]: Language on volume vol0 changed to en_US
Language set on volume vol0
Do you want to run DNS resolver? [n]: -- Say yes if you want to configure DNS
Do you want to run NIS client? [n]: y
Please enter NIS domain name []: - Provide NIS domain name
Please enter list of preferred NIS servers [*]: - Provide NIS server IPs
Setting the administrative (root) password for [hostname]
New password: - Set root password here
Retype new password:
This process will enable CIFS access to the filer from a Windows(R) system.
Use "?" for help at any prompt and Ctrl-C to exit without committing changes.
Your filer does not have WINS configured and is visible only to
clients on the same subnet.
Do you want to make the system visible via WINS? [n]: n -- Say yes if you want to configure WINS
A filer can be configured for multiprotocol access, or as an NTFS-only
filer. Since multiple protocols are currently licensed on this filer,
we recommend that you configure this filer as a multiprotocol filer
(1) Multiprotocol filer
(2) NTFS-only filer
Selection (1-2)? [1]: 1
CIFS requires local /etc/passwd and /etc/group files. NIS services,
which normally take the place of the local /etc files, are enabled on
this filer. However, if NIS is ever unavailable, it may be useful to
have a rudimentary /etc/passwd and /etc/group file for CIFS
authentication. This default passwd file would contain 'root',
'pcuser', and 'nobody'.
Should CIFS create default /etc/passwd and /etc/group files? [n]:
NIS is currently enabled but NIS group caching is disabled. This may
have a severe impact on CIFS authentication if the NIS servers are
slow to respond or unavailable. It is highly recommended that you
enable NIS group caching.
Would you like to enable NIS group caching? [y]:
By default, the NIS group cache is updated once a day at midnight. If
you would like to update the cache more often or at a different time,
specify a list of hours (1-24, representing the hours in a day) that
describe when the update should be performed.
Enter the hour(s) when NIS should update the group cache [24 ]:
Would you like to specify additional hours? [n]:
The default name for this CIFS server is 'FILERNAME'.
Would you like to change this name? [n]:
Data ONTAP CIFS services support four styles of user authentication.
Choose the one from the list below that best suits your situation.
(1) Active Directory domain authentication (Active Directory domains only)
(2) Windows NT 4 domain authentication (Windows NT or Active Directory domains)
(3) Windows Workgroup authentication using the filer's local user accounts
(4) /etc/passwd and/or NIS/LDAP authentication
Selection (1-4)? [1]: 4
What is the name of the Workgroup? [WORKGROUP]:
Tue Dec 9 11:48:34 GMT [rc:info]: NIS: Group Caching has been enabled
CIFS - Starting SMB protocol...
Tue Dec 9 11:48:34 GMT [nis.lclGrp.updateSuccess:info]: The local NIS group update was successful.
Welcome to the WORKGROUP Windows(R) workgroup
CIFS local server is running.
Password:
filername> -- Filer is up

******************************************************************

Perform filer administration from the admin host via rsh, or from the filer prompt you reached at the end of the previous step.
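Administering the filer over rsh from the admin host looks like this (the filer hostname is whatever you assigned during setup, and the admin host must be the one you granted access to in setup):

```
admin_host# rsh filername version
NetApp Release 7.3: Thu Jul 24 12:55:28 PDT 2008
```

The same commands shown below (df, vol status, and so on) can be run this way.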

filername> df

Filesystem kbytes used avail capacity Mounted on
/vol/vol0/ 164552 71264 93288 43% /vol/vol0/
/vol/vol0/.snapshot 0 0 0 ---% /vol/vol0/.snapshot

filername> vol status -r

Aggregate aggr0 (online, raid0) (zoned checksums)
Plex /aggr0/plex0 (online, normal, active)
RAID group /aggr0/plex0/rg0 (normal)
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
data v4.16 v4 1 0 FC:B - FCAL N/A 120/246784 127/261248
data v4.17 v4 1 1 FC:B - FCAL N/A 120/246784 127/261248

Spare disks
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
Spare disks for zoned checksum traditional volumes or aggregates only
spare v4.18 v4 1 2 FC:B - FCAL N/A 36/74752 43/89216
spare v4.19 v4 1 3 FC:B - FCAL N/A 36/74752 43/89216
spare v4.20 v4 1 4 FC:B - FCAL N/A 36/74752 43/89216
spare v4.21 v4 1 5 FC:B - FCAL N/A 36/74752 43/89216
spare v4.22 v4 1 6 FC:B - FCAL N/A 36/74752 43/89216
spare v4.24 v4 1 8 FC:B - FCAL N/A 36/74752 43/89216
spare v4.25 v4 1 9 FC:B - FCAL N/A 36/74752 43/89216
spare v4.26 v4 1 10 FC:B - FCAL N/A 36/74752 43/89216
spare v4.27 v4 1 11 FC:B - FCAL N/A 36/74752 43/89216
spare v4.28 v4 1 12 FC:B - FCAL N/A 36/74752 43/89216
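With the spare disks listed above you can grow aggr0 or build a new aggregate and a flexible volume on it. A minimal sketch using standard 7-mode commands (the aggregate/volume names and sizes are illustrative):

```
filername> aggr create aggr1 5            <=== new aggregate from 5 spare disks
filername> vol create testvol aggr1 100m  <=== 100 MB flexible volume on aggr1
filername> df -h testvol
```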


NetApp ONTAP Simulator and ESXi 4.1 Server

If, after installing and configuring the simulator, you can't get any network connectivity whatsoever, try the following steps:

The network interface that the simulator uses has to be in promiscuous mode. ESXi Server, by default, doesn't allow NICs in guest operating systems to be in promiscuous mode.

The fix is this:

Enable “Promiscuous Mode” for the vSwitch Port Group that the simulator VM's NIC resides on.

In the ESXi configuration,
- Select your ESXi server in the tree view on the left
- Select the “Configuration” tab
- Find the “Virtual Switch” where the vnic of your VM connects to
- Click on the “Properties” link for that Virtual Switch
- Select the “Virtual Machine Port Group”
- Click “Edit”
- Go to the “Security” tab
- Put a checkmark after the “Promiscuous Mode”, then set the value in the combobox to “Accept”
- Press the “OK” button in the “Virtual Machine Port Group” dialog
- Press the “Close” button in the “Virtual Switch” dialog
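On later ESXi releases (5.0 and up; ESXi 4.1 itself lacks this esxcli namespace, so use the vSphere Client steps above) the same change can be scripted from the ESXi shell. vSwitch0 is an assumed switch name:

```
# allow promiscuous mode on the standard vSwitch (assumed name: vSwitch0)
esxcli network vswitch standard policy security set --vswitch-name=vSwitch0 --allow-promiscuous=true
```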



Why enable Promiscuous Mode?
A router or bridge does more with traffic than a normal NIC does: it needs to see packets that are not addressed to its own MAC address, and promiscuous mode enables that.