Sunday, December 30, 2012

IPMP on Solaris 11



With the introduction of Solaris 11, network configuration is managed by network configuration profiles (NCPs). There are two types of profile that can be implemented: fixed, which is configured statically, and reactive, which is configured dynamically. The network configuration commands used are ipadm and dladm, which offer a lot of new features like link aggregation, VLAN tagging, IP tunneling, bridging and IPMP; the steps involved in configuring them are also fewer, which makes them less of a burden on system administrators.
Just wanted to touch on configuring a few of these features.

******
IPMP
******

IPMP (IP multipathing) groups network interfaces into a single logical interface. IPMP can be configured with two types of failure detection: link-based (layer 2) and probe-based (layer 3).
The IPMP feature lets us achieve distribution of data addresses (active-active) and transparent access failover (active-passive). This gives us high availability on our network interfaces when there is a failure, and lets us fail over an interface when we are doing maintenance on it.


A. Link-Based IPMP

===========
Active/Active
===========

1. Check that the Automatic network configuration profile is disabled and the DefaultFixed profile is enabled (online).

root@suntest:~# netadm list
TYPE        PROFILE        STATE
ncp         Automatic      disabled
ncp         DefaultFixed   online

2. List all the physical interfaces available.

root@suntest:~# dladm show-phys
LINK              MEDIA                STATE      SPEED  DUPLEX    DEVICE
net1              Ethernet             up         1000   full      e1000g1
net0              Ethernet             up         1000   full      e1000g0

3. Add the IP to /etc/hosts so that it remains persistent across reboots.

root@suntest:~# echo "192.168.75.25 testipmp0" >> /etc/hosts

4. Create the IPMP group and add the interfaces to it.

root@suntest:~# ipadm create-ipmp ipmp0
root@suntest:~# ipadm create-ip net0
root@suntest:~# ipadm create-ip net1
root@suntest:~# ipadm add-ipmp -i net0 -i net1 ipmp0

5. Assign the IP address to the IPMP interface that was just configured.

root@suntest:~# ipadm create-addr -T static -a 192.168.75.21/24 ipmp0/v4

6. Review the configuration.

root@suntest:~# ipmpstat -g
GROUP       GROUPNAME   STATE     FDT       INTERFACES
ipmp0       ipmp0       ok        --        net1 net0

root@suntest:~# ipmpstat -a
ADDRESS                   STATE  GROUP       INBOUND     OUTBOUND
suntest                   up     ipmp0       net1        net1 net0

root@suntest:~# ipmpstat -i
INTERFACE   ACTIVE  GROUP       FLAGS     LINK      PROBE     STATE
net0        yes     ipmp0       -------   up        disabled  ok
net1        yes     ipmp0       --mbM--   up        disabled  ok

As you can see, both network interfaces are active here and part of the IPMP group.
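To verify that failover behaves the way you expect, you can temporarily take one interface out of service with if_mpadm and watch the group adjust (a minimal sketch; if_mpadm ships with Solaris, but try this on a test system first). Offlining net0 should move its data addresses to net1, and ipmpstat -i should then report net0 as offline:

root@suntest:~# if_mpadm -d net0
root@suntest:~# ipmpstat -i
root@suntest:~# if_mpadm -r net0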

===============
Active/Passive
===============

To configure an Active/Passive link-based IPMP group, perform the same steps 1 to 4 and then:

5. Mark one of the interfaces in the IPMP group as standby and then assign the IP.

root@suntest:~# ipadm set-ifprop -p standby=on -m ip net1
root@suntest:~# ipadm create-addr -T static -a 192.168.75.21/24 ipmp0/v4

6. Review the configuration.

root@suntest:~# ipmpstat -g
GROUP       GROUPNAME   STATE     FDT       INTERFACES
ipmp0       ipmp0       ok        --        net0 (net1)

root@suntest:~# ipmpstat -a
ADDRESS                   STATE  GROUP       INBOUND     OUTBOUND
suntest                   up     ipmp0       net0        net0

root@suntest:~# ipmpstat -in
INTERFACE   ACTIVE  GROUP       FLAGS     LINK      PROBE     STATE
net1        no      ipmp0       is-----   up        disabled  ok
net0        yes     ipmp0       --mbM--   up        disabled  ok
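If you later want to go back to Active/Active, the standby flag can simply be switched off again; a small sketch using the same ipadm property as above:

root@suntest:~# ipadm set-ifprop -p standby=off -m ip net1
root@suntest:~# ipmpstat -i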


B. Probe-based IPMP

Probe-based IPMP is supported in Solaris 11 in two forms: with test IP addresses on the interfaces, and with transitive probing.

*** Configuring Probe-based with Test addresses (Active/Active) ***

1. Confirm that transitive probing is not enabled; the default is to use test addresses.

root@suntest:~# svccfg -s svc:/network/ipmp listprop config/transitive-probing
config/transitive-probing boolean     false

2. Create the IPMP group and add the interfaces

root@suntest:~# ipadm create-ipmp ipmp0
root@suntest:~# ipadm create-ip net0
root@suntest:~# ipadm create-ip net1
root@suntest:~# ipadm add-ipmp -i net0 -i net1 ipmp0

3. Assign the IP for the IPMP interface and test addresses for the physical interfaces.

root@suntest:~# ipadm create-addr -T static -a 192.168.75.21/24 ipmp0/v4
root@suntest:~# ipadm create-addr -T static -a 192.168.75.2/24 net0/test1
root@suntest:~# ipadm create-addr -T static -a 192.168.75.3/24 net1/test2

4. Add a target that the interfaces will probe.

root@suntest:~# route add default 192.168.75.1
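Note that a route added this way is not persistent. If you want the probe target to survive a reboot, one option (shown here only as a sketch) is to add it with the -p flag so it is stored in the persistent route table:

root@suntest:~# route -p add default 192.168.75.1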

5. Review that everything is working as expected.

root@suntest:~# ipmpstat -an
ADDRESS                   STATE  GROUP       INBOUND     OUTBOUND
192.168.75.21             up     ipmp0       net0        net1 net0

root@suntest:~# ipmpstat -tn
INTERFACE   MODE       TESTADDR            TARGETS
net1        routes     192.168.75.3        192.168.75.1
net0        routes     192.168.75.2        192.168.75.1

root@suntest:~# ipmpstat -in
INTERFACE   ACTIVE  GROUP       FLAGS     LINK      PROBE     STATE
net1        yes     ipmp0       -------   up        ok        ok
net0        yes     ipmp0       --mbM--   up        ok        ok


To configure probe-based IPMP with test addresses (Active/Passive), just add one step after step 3:

root@suntest:~# ipadm set-ifprop -p standby=on -m ip net1

and you will see the difference as below:

root@suntest:~# ipmpstat -in
INTERFACE   ACTIVE  GROUP       FLAGS     LINK      PROBE     STATE
net1        no      ipmp0       is-----   up        ok        ok
net0        yes     ipmp0       --mbM--   up        ok        ok

root@suntest:~# ipmpstat -an
ADDRESS                   STATE  GROUP       INBOUND     OUTBOUND
192.168.75.21             up     ipmp0       net0        net0
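The probe failure detection time (the FDT column of ipmpstat -g, 10 seconds by default) can be tuned through in.mpathd's configuration file. A minimal sketch, assuming you want to lower it to 5 seconds (the value is in milliseconds); check the in.mpathd documentation on your release before relying on it:

root@suntest:~# vi /etc/default/mpathd          (set FAILURE_DETECTION_TIME=5000)
root@suntest:~# svcadm refresh svc:/network/ipmp:default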


*** Configuring Transitive probing (Active/Active) ***

1. Enable the transitive probing

root@suntest:~# svccfg -s svc:/network/ipmp setprop config/transitive-probing=true
root@suntest:~# svcadm refresh ipmp

2. Create the IPMP group and add the interfaces

root@suntest:~# ipadm create-ipmp ipmp0
root@suntest:~# ipadm create-ip net0
root@suntest:~# ipadm create-ip net1
root@suntest:~# ipadm add-ipmp -i net0 -i net1 ipmp0

3. Assign the ip for the ipmp interface

root@suntest:~# ipadm create-addr -T static -a 192.168.75.21/24 ipmp0/v4

4. Review the configuration

root@sunclu1:~# ipmpstat -tn
INTERFACE   MODE       TESTADDR            TARGETS
net1        transitive <net1>              <net0>
net0        routes     192.168.75.21       192.168.75.1

root@sunclu1:~# ipmpstat -pn
TIME      INTERFACE   PROBE  NETRTT    RTT       RTTAVG    TARGET
0.77s     net1        t247   1.92ms    1.93ms    1.70ms    <net0>
0.77s     net0        i244   0.86ms    1.15ms    1.09ms    192.168.75.1
1.88s     net1        t248   1.68ms    1.69ms    1.70ms    <net0>
1.88s     net0        i245   0.73ms    1.21ms    1.11ms    192.168.75.1


Hope this document was helpful...


Thursday, December 13, 2012

Types of disks used in storage





The disks in today's storage environment are changing rapidly, with new types of disks available that produce more throughput and are smaller in size, which makes it possible to put more disks into a disk array; disk capacities are also higher than their predecessors.
Let's dig into the disks that are available on storage platforms and understand the technology they are built on. Before that, let's touch on the basics that are important to consider when you think about a disk.

A disk device has physical components and logical components. The physical components include disk platters and read/write heads. The logical components include disk slices, cylinders, tracks, and sectors.

All hard disk drives are composed of the same physical components; however, the quality of the parts inside the drive affects its performance. There are three important parts that work together to give us the performance we want:
  1.      Disk Platter
The disk platters are made of an aluminum or glass substrate coated with a magnetic surface, which is what actually lets us store data as magnetic bits. The platters in a drive are separated by disk spacers and are clamped to a rotating spindle that turns all the platters in unison; a motor mounted right below the spindle spins the platters at a constant rate, which is the RPM of the disk.
  2.       Drive Heads
The drive heads read and write the data in these magnetic bits on the platter; there are usually two heads per platter, one on either side of the disk.
  3.       Actuator Arm
All the heads are attached to a single head actuator, or actuator arm, that moves the heads around the platters.
As you can see, hard disks are built from electro-mechanical components, which have inherent performance limitations; this led to disks built on flash-based technology.

                                                                *******
                                                                   SSD
                                                                *******

A solid state disk, or SSD, is the fastest type of disk available. It is built using NAND-based flash memory and has none of the spinning parts of electromechanical disks, which lets it provide fast performance and low latency. Some new solid-state systems are designed to use solid state as primary storage while using spinning disk as less expensive storage for less active data.
Enterprise flash drives are solid-state drives (SSDs) that have been modified to meet the reliability required in an enterprise storage array, and they are widely used as the top tier in the automated storage tiering feature of the latest storage arrays.

                                                                  ******
                                                                     FC
                                                                  ******

Fibre Channel is a hard disk drive interface technology designed primarily for high-speed, high volume data transfers in storage. Fibre Channel standards specify the physical characteristics of the cables, connectors, and signals that link devices.
Fibre Channel provides three topology options for connecting devices: point-to-point, arbitrated loop, and fabric (sometimes called “switched” or “switched fabric”).
Using a Fibre Channel FC-AL loop we get speeds of 2, 4 or 8 Gbps. These loops are scalable (we can start with a basic array and extend the loop as needed) and reliable (superior data encoding and error checking), and they are built for mission-critical environments.

                                                                   ******
                                                                      SAS
                                                                   ******
SAS is the logical evolution of SCSI, including its long-established software advantage and its multi-channel, dual-port connection interface for enterprise storage. It provides new levels of speed and connectivity while retaining the functionality and reliability that make SAS disks a good alternative to FC disks in enterprise storage platforms.
SAS disks are available at 10k and 15k RPM, and the SCSI error-reporting and error-recovery commands on SAS are more functional than those on SATA drives.

                                                                      ******
                                                                       SATA
                                                                      ******

SATA technology was developed to replace the legacy desktop parallel ATA (PATA) interface. The SATA interface is designed to meet the requirements of entry-level to enterprise-level storage deployments. SATA I provides a point-to-point data transfer rate of 1.5 Gbps, while the newer SATA II disks give a data transfer rate of 3 Gbps; these disks rotate at 5400, 7200 and 10k RPM.
SATA disks offer better capacity than other disk types. They may not give you the performance of SAS/FC/SSD disks, but they are of great use in environments with shared filesystems like NFS/CIFS/SMB or in low-cost environments.
SATA drives use native command queuing, while SAS drives use tagged command queuing.


Wednesday, December 5, 2012

Installing PowerPath and changing the EMC pseudo device name


EMC PowerPath is one of the best multipathing software packages used on servers; not only is it stable, it is very easy to use. Today I would like to show how to install it and how to change the pseudo device name. Sometimes, when you are working on servers where LUNs are shared, you want the device names to be the same so that you can find them easily, which also makes troubleshooting easier.

The first thing you need is a powerlink.emc.com account to download the EMC PowerPath version that is supported for your operating system.

[root@linux01 ~]# rpm -ivh EMC/EMCPower.LINUX-5.6.0.00.00-143.RHEL5.x86_64.rpm
   Preparing...                         ########################################### [100%]
  1:EMCpower.LINUX         ########################################### [100%]
All trademarks used herein are the property of their respective owners.
NOTE:License registration is not required to manage the CLARiiON AX series array.

After installing the PowerPath package, the next thing you need to do is license the software to use its features.

[root@linux01 ~]# emcpreg -list
There are no license keys now registered.

[root@linux01 ~]# emcpreg -add XXXX-XXXX-XXXX-XXXX-XXXX-XXXX
1 key(s) successfully added.

[root@linux01 ~]# emcpreg -list
Key XXXX-XXXX-XXXX-XXXX-XXXX-XXXX
  Product: PowerPath
  Capabilities: All

Start the service, and if you have completed the zoning, check for any new LUNs visible on the host HBAs.
[root@linux01 ~]# /etc/init.d/PowerPath start
Starting PowerPath:  done

[root@linux01 ~]# powermt check

[root@linux01 ~]# powermt display dev=all
Pseudo name=emcpowerb
CLARiiON ID=CKM00103100530 [Test-SG]
Logical device ID=600601604D3827004E67486CA534E211 [LUN 1]
state=alive; policy=CLAROpt; priority=0; queued-IOs=0;
Owner: default=SP A, current=SP A       Array failover mode: 1
==============================================================================
--------------- Host ---------------   - Stor -   -- I/O Path --  -- Stats ---
###  HW Path               I/O Paths    Interf.   Mode    State   Q-IOs Errors
==============================================================================
   2 qla2xxx                  sdc       SP A1     active  alive       0      0
   2 qla2xxx                  sdf       SP B0     active  alive       0      0
   3 qla2xxx                  sdh       SP A0     active  alive       0      0
   3 qla2xxx                  sdj       SP B1     active  alive       0      0


At times you want the LUNs that are shared across servers to have the same pseudo name, so that you can recognize them when you need to increase their size or when troubleshooting.

[root@linux01 ~]# emcpadm getusedpseudos
PowerPath pseudo device names in use:
Pseudo Device Name      Major# Minor#
        emcpowera         120      0
        emcpowerb         120     16

After finding all the pseudo devices that are in use, find the next free pseudo device name that you can use.

[root@linux01 ~]# emcpadm getfreepseudos
Next free pseudo device name(s) from emcpowera are:
Pseudo Device Name      Major# Minor#
        emcpowerc         120     32

Now let's rename the device and check that the change is reflected.
[root@linux01 ~]# emcpadm renamepseudo -s emcpowera -t emcpowerc

[root@linux01 ~]# emcpadm getusedpseudos
PowerPath pseudo device names in use:

Pseudo Device Name      Major# Minor#
        emcpowerc         120      0
        emcpowerb         120     16
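One extra step worth doing after the rename (and note that the device should be unmounted and not in use while you rename it) is to save the PowerPath configuration so the new name persists across reboots; a short sketch:

[root@linux01 ~]# powermt save
[root@linux01 ~]# powermt display dev=emcpowerc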

Sunday, December 2, 2012

Redhat Linux Cluster on RHEL 6.3

Today I am posting a two-node Red Hat Active/Passive failover cluster on RHEL 6.3 that I got to set up, and it works like a charm. I was really impressed with the way the Red Hat Cluster Suite works and with its ease of installation.


There are two parts to setting up a cluster, and both are equally important.

====================================
Part 1 - Steps to do before starting the installation:
====================================

Make sure that all the Red Hat Cluster Suite packages are installed.

Add the host entries to /etc/hosts on both nodes and set up passwordless SSH, which helps with copying files.
[root@node01 ~]#cat /etc/hosts
192.168.10.2    node01.test.com        node01
192.168.10.3    node02.test.com        node02

You will have to confirm that the interconnect can exchange multicast frames between the nodes; please run the command below on both nodes simultaneously.

[root@node01 ~]#omping 192.168.10.3 192.168.10.2
192.168.10.3 : joined (S,G) = (*, 232.43.211.234), pinging
192.168.10.3 :   unicast, seq=1, size=69 bytes, dist=0, time=0.131ms
192.168.10.3 : multicast, seq=1, size=69 bytes, dist=0, time=0.174ms

[root@node02 ~]# omping 192.168.10.2 192.168.10.3
192.168.10.2 : waiting for response msg
192.168.10.2 : joined (S,G) = (*, 232.43.211.234), pinging
192.168.10.2 :   unicast, seq=1, size=69 bytes, dist=0, time=0.224ms
192.168.10.2 : multicast, seq=1, size=69 bytes, dist=0, time=0.269ms

Make sure the services below are switched off on both nodes, so that you can avoid troubleshooting later.

[root@node01 ~]#chkconfig ip6tables off
[root@node01 ~]#chkconfig iptables off
[root@node01 ~]#chkconfig acpid off

and SELinux should be disabled on both nodes
[root@node01 ~]# cat /etc/selinux/config |grep -i ^SELINUX=
SELINUX=disabled

Add the entry in /etc/httpd/conf/httpd.conf (node01, node02)
Listen 192.168.10.10:80

Add the cluster services to the default runlevels so they start at boot
[root@node01 ~]#chkconfig ricci on
[root@node01 ~]#chkconfig cman on
[root@node01 ~]#chkconfig rgmanager on

Set a password for the ricci user
[root@node01 ~]#passwd ricci

Reboot both the nodes to make the changes take effect.

========================
Part 2 - Starting with Installation:
========================
I have connected both nodes to an EMC CLARiiON and mapped two LUNs, one of which I will be using for fencing.

[root@node01 ~]# powermt display dev=all |grep emcpower
Pseudo name=emcpowerb
Pseudo name=emcpowera

Start the ricci service before you start to configure the cluster.
[root@node01 ~]#service ricci start
Starting ricci: [ OK ]

* Create a basic cluster config file and give your cluster a name
[root@node01 ~]#ccs -h node01 --createcluster webcluster

* Add both nodes to the cluster
[root@node01 ~]#ccs -h node01 --addnode node01 --votes=1 --nodeid=1
[root@node01 ~]#ccs -h node01 --addnode node02 --votes=1 --nodeid=2

* Here we set the fence daemon properties: post_fail_delay is how long the cluster waits (0 seconds) before fencing a node after it has failed, and post_join_delay is how long it waits (30 seconds) before fencing a node after it joins the fence domain
[root@node01 ~]#ccs -h node01 --setfencedaemon post_fail_delay=0 post_join_delay=30

* We set the cman properties for a two-node cluster with one expected vote, so that the services keep running on one node if the other fails
[root@node01 ~]#ccs -h node01 --setcman two_node=1 expected_votes=1

* Add a fence method which will be used by the nodes
[root@node01 ~]#ccs -h node01 --addmethod scsi node01
[root@node01 ~]#ccs -h node01 --addmethod scsi node02

* Add the fence device, the fencing agent and a log file, which helps with troubleshooting
[root@node01 ~]#ccs -h node01 --addfencedev scsi_dev agent=fence_scsi devices=/dev/emcpowera logfile=/var/log/cluster/fence_scsi.log aptpl=1

* Add a fence instance for each node, which the cluster uses when it needs to fence a failed node
[root@node01 ~]#ccs -h node01 --addfenceinst scsi_dev  node01 scsi key=1
[root@node01 ~]#ccs -h node01 --addfenceinst scsi_dev  node02 scsi key=2

* Add an unfence instance for each node, which registers that node's key on the device when the node starts up
[root@node01 ~]#ccs -h node01 --addunfenceinst scsi_dev  node01  key=1 action=on
[root@node01 ~]#ccs -h node01 --addunfenceinst scsi_dev  node02  key=2 action=on

* Create a failover domain and add the nodes to it; ordered=1 makes failover follow the per-node priorities, and nofailback=1 stops the service from failing back when the node with the higher priority comes back online after a failover
[root@node01 ~]#ccs -h node01 --addfailoverdomain web-failover ordered=1 nofailback=1
[root@node01 ~]#ccs -h node01 --addfailoverdomainnode web-failover node01 1
[root@node01 ~]#ccs -h node01 --addfailoverdomainnode web-failover node02 2

* Create a service in the cluster
[root@node01 ~]#ccs -h node01 --addservice web domain=web-failover recovery=relocate autostart=1

* Now we add the resources to the global cluster config, which the service will use when it starts. I am going to add three resources: a SAN filesystem that gets mounted when the service starts, a virtual IP, and the Apache service, all on the active node.

[root@node01 ~]#ccs -h node01 --addresource fs name=web_fs device=/dev/emcpowerb1 mountpoint=/var/www fstype=ext3
[root@node01 ~]#ccs -h node01 --addresource ip address=192.168.10.10 monitor_link=yes
[root@node01 ~]#ccs -h node01 --addresource apache name=apache_server config_file=conf/httpd.conf server_root=/etc/httpd shutdown_wait=10

* Then add the subservices to the service in order, so that the filesystem gets mounted, then the IP is configured, and then the Apache service is started on the active cluster node.
[root@node01 ~]#ccs -h node01 --addsubservice web fs ref=web_fs
[root@node01 ~]#ccs -h node01 --addsubservice web ip ref=192.168.10.10
[root@node01 ~]#ccs -h node01 --addsubservice web apache ref=apache_server

* Copy the cluster config to both nodes
[root@node01 ~]#cman_tool version -r
[root@node01 ~]#ccs -h node01 --sync --activate

* Start the cluster service
[root@node01 ~]#ccs -h node01 --startall

[root@node01 ~]# cman_tool nodes
Node  Sts   Inc   Joined               Name
   1   M    112   2012-11-30 23:44:28  node01
   2   M    116   2012-11-30 23:44:28  node02

[root@node01 ~]# ccs -h node01 --lsnodes
node01: votes=1, nodeid=1
node02: votes=1, nodeid=2

[root@node01 ~]# ccs -h node01 --lsfencedev
scsi_dev: logfile=/var/log/cluster/fence_scsi.log, aptpl=1, devices=/dev/emcpowera, agent=fence_scsi

[root@node01 ~]#ccs -h node01 --lsfailoverdomain
web-failover: restricted=0, ordered=1, nofailback=0
  node01: priority=1
  node02: priority=2

[root@node01 ~]# ccs -h node01 --lsservices
service: name=web, exclusive=0, domain=web-failover, autostart=1, recovery=relocate
  fs: ref=web_fs
  ip: ref=192.168.10.10
  apache: ref=apache_server
resources:
  fs: name=web_fs, device=/dev/emcpowerb1, mountpoint=/var/www, fstype=ext3
  ip: monitor_link=yes, address=192.168.10.10
  apache: name=apache_server, shutdown_wait=10, config_file=conf/httpd.conf, server_root=/etc/httpd

[root@node01 ~]# clustat
Cluster Status for webcluster @ Sun Dec  2 21:58:31 2012
Member Status: Quorate

 Member Name                                                     ID   Status
 ------ ----                                                     ---- ------
 node01                                                              1 Online, Local, rgmanager
 node02                                                              2 Online, rgmanager

 Service Name                                                     Owner (Last)                                                     State
 ------- ----                                                     ----- ------                                                     -----
 service:web                                                      node01                                                           started
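To test the failover once the service is running, you can relocate it manually to the other node with clusvcadm (part of rgmanager) and confirm with clustat that the Owner has changed; a quick sketch:

[root@node01 ~]# clusvcadm -r web -m node02
[root@node01 ~]# clustat
[root@node01 ~]# clusvcadm -r web -m node01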

The /etc/cluster/cluster.conf file can also be edited manually; we just have to make sure that the config version number is incremented, and then run ccs_config_validate to check the validity of the config file.
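For reference, the version number lives in the config_version attribute of the <cluster> element at the top of the file; a quick way to check it (the output below is only illustrative):

[root@node01 ~]# grep "<cluster " /etc/cluster/cluster.conf
<cluster config_version="2" name="webcluster">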

[root@node01 ~]# ccs_config_validate
Configuration validates
[root@node01 ~]#cman_tool version -r
[root@node01 ~]#ccs -h node01 --stopall
[root@node01 ~]#ccs -h node01 --startall


Hope this document was helpful!!

Saturday, November 24, 2012

Migrating a zone from one host to another


Migrating a zone from one system to another involves the following steps:

1. Detaching the Zone - This leaves the zone on the originating system in the configured state. Behind the scenes, the system generates a manifest of the information needed to validate that the zone can be successfully attached to a new host machine.
  
To detach a zone, first halt the zone, then perform the detach operation.
     
2. Data Migration - You move the data which represents the zone to a new host system.
  
3. Zone Configuration - You create the zone configuration on the new host using the zonecfg command, as shown in the example below.

Use the create subcommand to begin configuring a new zone.

4. Attaching the Zone - This validates that the host is capable of supporting the zone before the attach can succeed. The zone is left in the installed state. The syntax for attaching a zone is shown below.


Detaching a Zone
host1# zoneadm -z work-zone halt
host1# zoneadm -z work-zone detach

Example of Migrating Zone Data
On host1:
host1# cd /export/zones
host1# tar cf work-zone.tar work-zone
host1# sftp host2
Connecting to host2...
Password:
sftp> cd /export/zones
sftp> put work-zone.tar
Uploading work-zone.tar to /export/zones/work-zone.tar
sftp> quit

On host2:
host2# cd /export/zones
host2# tar xf work-zone.tar

Creating a Zone Configuration
host2# zonecfg -z work-zone
work-zone: No such zone configured

Configuring a New Zone
zonecfg:work-zone> create -a /export/zones/work-zone
zonecfg:work-zone> commit
zonecfg:work-zone> exit

Attaching a Zone
host2# zoneadm -z work-zone attach
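Before the real attach, it can be useful to do a dry run against the manifest that detach generates (copy the manifest over to host2 the same way as the zone data), and if the target host carries newer packages you can ask attach to update the zone to match. A hedged sketch of both; verify the exact options in zoneadm(1M) on your release:

host1# zoneadm -z work-zone detach -n > /tmp/work-zone.manifest
host2# zoneadm -z work-zone attach -n /tmp/work-zone.manifest
host2# zoneadm -z work-zone attach -u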


Friday, November 23, 2012

Increasing the size of ZFS volume online



How to increase the size of a ZFS Volume online.

ZFS is one of the great features that Solaris has, and it is good to see how easily we can manage filesystems with it.

1. Display file system mount status

# df -h /mnt/test/test_vol/
Filesystem             size   used  avail capacity  Mounted on
/dev/zvol/dsk/test/test_vol
                       962M   1.0M   903M     1%    /mnt/test/test_vol


2. Display current ZFS volume size and reservation properties

# zfs get volsize,reservation test/test_vol
NAME           PROPERTY     VALUE   SOURCE
test/test_vol  volsize      1G      -
test/test_vol  reservation  1G      local

Confirms the volume is 1Gig in size with a 1Gig reservation.

3. Change the volsize ZFS property for test/test_vol to 10Gig.

# zfs set volsize=10g test/test_vol

4. Confirm the changes were made by displaying the volsize and reservation ZFS properties.

# zfs get volsize,reservation test/test_vol
NAME           PROPERTY     VALUE   SOURCE
test/test_vol  volsize      10G     -
test/test_vol  reservation  10G     local

Notice the volsize is now set to 10G. At this point, the OS doesn't yet see the expanded file system; we will have to use "growfs" to increase the filesystem size.

5. Run growfs to expand mounted file system

# growfs -M /mnt/test/test_vol /dev/zvol/rdsk/test/test_vol

The output should now show you the new volume size.

6. Confirm the size of the mounted file system using df.

# df -h /mnt/test/test_vol/
Filesystem             size   used  avail capacity  Mounted on
/dev/zvol/dsk/test/test_vol
                       9.9G   2.0M   9.8G     1%    /mnt/test/test_vol

That is how you increase the size of the ZFS volume online
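Note that growfs is needed here only because a UFS filesystem sits on top of the ZFS volume. For a plain ZFS filesystem (dataset) there is no separate grow step: adjust the quota and reservation and the extra space shows up immediately. A minimal sketch, using a hypothetical dataset named test/data:

# zfs set quota=20g test/data
# zfs set reservation=20g test/data
# zfs get quota,reservation test/data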

Configuring Solaris Whole Root zone



Solaris zones allow virtual environments to run on the same physical system. Previously, the only way to partition an environment in a single box was by using an expensive high-end server capable of physical partitioning, something which is still available with Oracle Sun servers.

Zones provide a virtual operating system environment within a single physical instance of the Solaris OS. Applications can run in an isolated and secure environment. This isolation prevents an application running in one zone from monitoring or affecting an application running in a different zone.
A further important aspect of zones is that a failing application, such as one that would traditionally have leaked all available memory, or exhausted all CPU resources, can be limited to only affect the zone in which it is running. This is achieved by limiting the amount of physical resources on the system that the zone can use.

Another great component of Solaris zones is resource management (a minimal zonecfg sketch follows this list). It allows you to do the following:

-    Allocate specific computer resources, such as CPU time and memory.

-    Monitor how resource allocations are being used, and adjust the allocations when required.

-    Regulate how much physical memory is used by a project, through the resource capping daemon (rcapd), by "capping" the overall amount that can be used.
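In recent Solaris 10 releases the memory and CPU caps can be set directly in the zone configuration with the capped-memory and capped-cpu resources (the physical memory cap is enforced by rcapd). A minimal sketch with example values, using the test-zone that is created later in this post:

# zonecfg -z test-zone
zonecfg:test-zone> add capped-memory
zonecfg:test-zone:capped-memory> set physical=512m
zonecfg:test-zone:capped-memory> end
zonecfg:test-zone> add capped-cpu
zonecfg:test-zone:capped-cpu> set ncpus=1
zonecfg:test-zone:capped-cpu> end
zonecfg:test-zone> commit
zonecfg:test-zone> exit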

There are two types of zones in Solaris: the global zone and non-global zones.

Think of a global zone as the server itself, the traditional view of a Solaris system as we all know it, where you can login as root and have full control of the entire system. The global zone is the default zone and is used for system-wide configuration and control. Every system contains a global zone and there can only be one global zone on a physical Solaris server.

A non-global zone is created from the global zone and is also managed by it. You can have up to 8192 non-global zones on a single physical system; the only real limitation is the capability of the server itself. Applications that run in a non-global zone are isolated from applications running in a separate non-global zone, allowing multiple versions of the same application to run on the same physical server.

There are two types of Non Global zone

* Sparse Root zone

A simple zone that uses the default settings and shares most of the operating system with the global zone.
Creating such a zone involves letting the system pick the default settings, which include read-only loopback filesystem (lofs) mounts that share most of the OS.

* Whole Root zone

A zone that resides on its own slice and has its own copy of the operating system.

The following are the steps to create a whole root non-global zone, on a separate slice with its own copy of the OS files.

1.Create file system and mount

# newfs /dev/rdsk/c0t1d0s4
newfs: construct a new file system /dev/rdsk/c0t1d0s4: (y/n)? y
/dev/rdsk/c0t1d0s4: 16567488 sectors in 16436 cylinders of 16 tracks, 63 sectors
        8089.6MB in 187 cyl groups (88 c/g, 43.31MB/g, 5504 i/g)
Super-block backups (for fsck -F ufs -o b=#) at:
 32, 88800, 177568, 266336, 355104, 443872, 532640, 621408, 710176, 798944,
 15700704, 15789472, 15878240, 15967008, 16055776, 16144544, 16233312,
 16322080, 16410848, 16499616,
# vi /etc/vfstab  -> and add the following entry
/dev/dsk/c0t1d0s4  /dev/rdsk/c0t1d0s4  /export/test-zone   ufs   1   yes   -
# mkdir /export/test-zone
# mount  /export/test-zone  
# chmod 700 /export/test-zone

2.Configure the zone not to use any inherit-pkg-dir entries

# zonecfg -z test-zone
test-zone: No such zone configured
Use 'create' to begin configuring a new zone.

Use of the "-b" option creates a blank (whole root) zone, as compared to a sparse zone, which includes lofs filesystems from the global zone root.

zonecfg:test-zone> create -b
zonecfg:test-zone> info
zonename: test-zone
zonepath:
brand: native
autoboot: false
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
zonecfg:test-zone> set autoboot=true
zonecfg:test-zone> set zonepath=/export/test-zone
zonecfg:test-zone> add net
zonecfg:test-zone:net> set address=192.168.3.171
zonecfg:test-zone:net> set physical=fjgi0
zonecfg:test-zone:net> set defrouter=192.168.3.1
zonecfg:test-zone:net> end
zonecfg:test-zone> verify
zonecfg:test-zone> commit
zonecfg:test-zone> exit
All the configuration settings for the zone are stored in its XML file under /etc/zones.

# cat /etc/zones/test-zone.xml  (DO NOT EDIT THIS FILE.  Use zonecfg to change the parameters of a particular zone.)


3.Now verify the zone against the /export/test-zone directory, which will be the root file system for test-zone.

# zoneadm -z test-zone verify

4.Check the status and verify the zone:

# /usr/sbin/zoneadm list -vc
  ID NAME         STATUS       PATH                 BRAND     IP
   0 global       running      /                    native    shared
   - test-zone    configured   /export/test-zone    native    shared

# zoneadm -z test-zone verify
If an error message is displayed and the zone fails to verify, make the corrections specified in the message and try the command again.


5.Install the newly created zone

# zoneadm -z test-zone install
Preparing to install zone <test-zone>.
Creating list of files to copy from the global zone.
Copying <118457> files to the zone.

  
6.Complete the basic configurations

When a zone is booted for the first time it will take you through the normal configuration questions, just as if you had booted from a new installation.
Use zlogin -C <zonename> to log in to the new zone at its console.

# zlogin -C test-zone
- Language
- Type of terminal being used
- Host name (e.g. test-zone)
- Security policy (Kerberos or standard UNIX)
- Naming service type (None is a valid response)
- Naming service domain
- Name server
- Default time zone

Note: The host name for each non-global zone should be unique and must be different from that of the global zone.

On the global zone, use zoneadm list -vi to show the current status of the new zone.

# /usr/sbin/zoneadm list -vi
  ID NAME         STATUS       PATH                 BRAND     IP
   0 global       running      /                    native    shared
   1 test-zone    running      /export/test-zone    native    shared
  
Use the following commands to boot, reboot and halt the zone:

# zoneadm -z test-zone boot
# zoneadm -z test-zone reboot
# zoneadm -z test-zone halt