Saturday, November 24, 2012

Migrating a zone from one host to another


Migrating a zone from one system to another involves the following steps:

1. Detaching the Zone - This leaves the zone on the originating system in the configured state. Behind the scenes, the system generates a manifest of the information needed to validate that the zone can be successfully attached to a new host machine.
  
To detach a zone, first halt the zone, then perform the detach operation.
     
2. Data Migration - You move the data which represents the zone to a new host system.
  
3. Zone Configuration - You create the zone configuration on the new host using the zonecfg command, as shown in the example below.

Use the create subcommand to begin configuring the new zone.

4. Attaching the Zone - This validates that the host is capable of supporting the zone before the attach can succeed. The zone is left in the installed state. The syntax for attaching a zone is shown below.


Detaching a Zone
host1# zoneadm -z work-zone halt
host1# zoneadm -z work-zone detach

Example of Migrating Zone Data
On host1:
host1# cd /export/zones
host1# tar cf work-zone.tar work-zone
host1# sftp host2
Connecting to host2...
Password:
sftp> cd /export/zones
sftp> put work-zone.tar
Uploading work-zone.tar to /export/zones/work-zone.tar
sftp> quit

On host2:
host2# cd /export/zones
host2# tar xf work-zone.tar

Creating a Zone Configuration
host2# zonecfg -z work-zone
work-zone: No such zone configured

Configuring a New Zone
zonecfg:work-zone> create -a /export/zones/work-zone
zonecfg:work-zone> commit
zonecfg:work-zone> exit

Attaching a Zone
host2# zoneadm -z work-zone attach
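The whole sequence above can be wrapped in a small shell sketch. This is a hypothetical helper, not part of Solaris: with DRYRUN=1 the run function prints each command instead of executing it, so you can review the steps before running them on real hosts. The zone name and paths are the example values from this page.

```shell
#!/bin/sh
# Hypothetical wrapper around the migration steps above.
# With DRYRUN=1 each command is printed instead of executed.
run() { if [ -n "$DRYRUN" ]; then echo "+ $*"; else "$@"; fi; }

# Steps 1-2 run on the originating host.
detach_and_pack() {
    zone=$1 zdir=${2:-/export/zones}
    run zoneadm -z "$zone" halt                        # stop the zone
    run zoneadm -z "$zone" detach                      # leaves it in the configured state
    run tar -cf "$zdir/$zone.tar" -C "$zdir" "$zone"   # pack the zone data
}

# Steps 3-4 run on the new host, after the tarball has been copied over.
configure_and_attach() {
    zone=$1 zdir=${2:-/export/zones}
    run tar -xf "$zdir/$zone.tar" -C "$zdir"           # unpack the zone data
    run zonecfg -z "$zone" "create -a $zdir/$zone"     # recreate the configuration
    run zoneadm -z "$zone" attach                      # validate the host and attach
}
```

For example, DRYRUN=1 detach_and_pack work-zone prints the halt, detach, and tar commands so you can confirm them before running for real.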


Friday, November 23, 2012

Increasing the size of ZFS volume online



How to increase the size of a ZFS volume online.

ZFS is one of the great features Solaris has, and it is good to see how easily the file system can be managed.

1. Display file system mount status

# df -h /mnt/test/test_vol/
Filesystem             size   used  avail capacity  Mounted on
/dev/zvol/dsk/test/test_vol
                       962M   1.0M   903M     1%    /mnt/test/test_vol


2. Display current ZFS volume size and reservation properties

# zfs get volsize,reservation test/test_vol
NAME           PROPERTY     VALUE  SOURCE
test/test_vol  volsize      1G     -
test/test_vol  reservation  1G     local

Confirms the volume is 1Gig in size with a 1Gig reservation.

3. Change the volsize ZFS property for test/test_vol to 10Gig.

# zfs set volsize=10g test/test_vol

4. Confirm the changes were made by displaying the volsize and reservation ZFS properties.

# zfs get volsize,reservation test/test_vol
NAME           PROPERTY     VALUE  SOURCE
test/test_vol  volsize      10G    -
test/test_vol  reservation  10G    local

Notice the volsize is now set to 10G. At this point, the OS doesn't see the expanded space yet. We will have to use "growfs" to grow the UFS file system that lives on the volume.

5. Run growfs to expand mounted file system

# growfs -M /mnt/test/test_vol /dev/zvol/rdsk/test/test_vol

The output should now show you the new volume size.

6. Confirm the size of the mounted file system using df.

# df -h /mnt/test/test_vol/
Filesystem             size   used  avail capacity  Mounted on
/dev/zvol/dsk/test/test_vol
                       9.9G   2.0M   9.8G     1%    /mnt/test/test_vol

That is how you increase the size of a ZFS volume online.
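The resize steps above can be collected into a small sketch. Again a hypothetical helper: with DRYRUN=1 it only prints the commands, and the dataset and mount point arguments below are the example values used on this page.

```shell
#!/bin/sh
# Hypothetical helper for the online grow procedure above.
# With DRYRUN=1 each command is printed instead of executed.
run() { if [ -n "$DRYRUN" ]; then echo "+ $*"; else "$@"; fi; }

grow_zvol() {
    vol=$1 size=$2 mnt=$3
    run zfs set volsize="$size" "$vol"           # grow the ZFS volume
    run growfs -M "$mnt" /dev/zvol/rdsk/"$vol"   # grow the UFS file system on it
    run df -h "$mnt"                             # confirm the new size
}
```

For example, DRYRUN=1 grow_zvol test/test_vol 10g /mnt/test/test_vol prints the three commands in order.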

Configuring a Solaris Whole Root Zone



Solaris zones allow virtual environments to run on the same physical system. Previously, the only way to partition an environment in a single box was to use an expensive high-end server capable of physical partitioning, which is still available with Oracle Sun servers.

Zones provide a virtual operating system environment within a single physical instance of the Solaris OS. Applications can run in an isolated and secure environment. This isolation prevents an application running in one zone from monitoring or affecting an application running in a different zone.
A further important aspect of zones is that a failing application, such as one that would traditionally have leaked all available memory, or exhausted all CPU resources, can be limited to only affect the zone in which it is running. This is achieved by limiting the amount of physical resources on the system that the zone can use.

Another great component of Solaris zones is resource management. It allows you to do the following:

-    Allocate specific system resources, such as CPU time and memory.

-    Monitor how resource allocations are being used, and adjust the allocations when required.

-    Regulate how much physical memory is used by a project through the resource capping daemon (rcapd), by "capping" the overall amount that can be used.
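As a sketch of that capping workflow, the hypothetical helper below enables rcapd, sets an rcap.max-rss cap on a project, and watches usage with rcapstat. The project name user.oracle and the 512M cap are example values only, and with DRYRUN=1 the commands are printed rather than executed.

```shell
#!/bin/sh
# Hypothetical sketch of capping a project's physical memory with rcapd.
# "user.oracle" and the 512M cap are example values, not recommendations.
# With DRYRUN=1 each command is printed instead of executed.
run() { if [ -n "$DRYRUN" ]; then echo "+ $*"; else "$@"; fi; }

cap_project_memory() {
    proj=$1 cap=$2
    run rcapadm -E                                  # enable the resource capping daemon
    run projmod -s -K "rcap.max-rss=$cap" "$proj"   # set the RSS cap on the project
    run rcapstat 5 5                                # report cap vs. usage, 5s interval
}
```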

There are two types of zones in Solaris: the global zone and non-global zones.

Think of a global zone as the server itself, the traditional view of a Solaris system as we all know it, where you can login as root and have full control of the entire system. The global zone is the default zone and is used for system-wide configuration and control. Every system contains a global zone and there can only be one global zone on a physical Solaris server.

A non-global zone is created from the global zone and also managed by it. You can have up to 8192 non-global zones on a single physical system; the only real limitation is the capability of the server itself. Applications that run in a non-global zone are isolated from applications running in a separate non-global zone, allowing multiple versions of the same application to run on the same physical server.

There are two types of non-global zones:

* Sparse Root zone

A simple zone that uses the default settings and shares most of the operating system with the global zone.
Creating such a zone lets the system pick default settings, which include read-only loopback file system (lofs) mounts that share most of the OS.

* Whole Root zone

A zone that resides on its own slice and has its own copy of the operating system.

Following are the steps to create a whole root non-global zone - on a separate slice with its own copy of the OS files.

1. Create the file system and mount it

# newfs /dev/rdsk/c0t1d0s4
newfs: construct a new file system /dev/rdsk/c0t1d0s4: (y/n)? y
/dev/rdsk/c0t1d0s4: 16567488 sectors in 16436 cylinders of 16 tracks, 63 sectors
        8089.6MB in 187 cyl groups (88 c/g, 43.31MB/g, 5504 i/g)
Super-block backups (for fsck -F ufs -o b=#) at:
 32, 88800, 177568, 266336, 355104, 443872, 532640, 621408, 710176, 798944,
 15700704, 15789472, 15878240, 15967008, 16055776, 16144544, 16233312,
 16322080, 16410848, 16499616,
Add the following entry to /etc/vfstab:
/dev/dsk/c0t1d0s4  /dev/rdsk/c0t1d0s4  /export/test-zone   ufs   1   yes   -
# mkdir /export/test-zone
# mount  /export/test-zone  
# chmod 700 /export/test-zone

2. Configure the zone not to use any inherit-pkg-dir entries

# zonecfg -z test-zone
test-zone: No such zone configured
Use 'create' to begin configuring a new zone.

The "-b" option creates a blank zone; a sparse zone, by contrast, includes lofs file systems from the global zone root.

zonecfg:test-zone> create -b
zonecfg:test-zone> info
zonename: test-zone
zonepath:
brand: native
autoboot: false
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
zonecfg:test-zone> set autoboot=true
zonecfg:test-zone> set zonepath=/export/test-zone
zonecfg:test-zone> add net
zonecfg:test-zone:net> set address=192.168.3.171
zonecfg:test-zone:net> set physical=fjgi0
zonecfg:test-zone:net> set defrouter=192.168.3.1
zonecfg:test-zone:net> end
zonecfg:test-zone> verify
zonecfg:test-zone> commit
zonecfg:test-zone> exit
All the configuration settings for the zone are stored in an XML file under /etc/zones.

# cat /etc/zones/test-zone.xml  (DO NOT EDIT THIS FILE. Use zonecfg to change parameters of a particular zone.)


3. Verify the zone configuration. The /export/test-zone directory will be the root file system for test-zone.

# zoneadm -z test-zone verify

4. Check the status and verify the zone:

# /usr/sbin/zoneadm list -vc
  ID NAME       STATUS      PATH               BRAND   IP
   0 global     running     /                  native  shared
   - test-zone  configured  /export/test-zone  native  shared

# zoneadm -z test-zone verify
If an error message is displayed and the zone fails to verify, make the corrections specified in the message and try the command again.


5. Install the newly created zone

# zoneadm -z test-zone install
Preparing to install zone <test-zone>.
Creating list of files to copy from the global zone.
Copying <118457> files to the zone.

  
6. Complete the basic configuration

When a zone is booted for the first time, it takes you through the normal configuration questions, just as if you had booted from a new installation.
Use zlogin -C <zonename> to log in to the new zone at its console.

# zlogin -C test-zone
- Language
- Type of terminal being used
- Host name (e.g. test-zone)
- Security policy (Kerberos or standard UNIX)
- Naming service type (None is a valid response)
- Naming service domain
- Name server
- Default time zone

Note: The host name for each non-global zone should be unique and must be different from the global zone's.

On the global zone, use zoneadm list -vi to show the current status of the new zone:

# /usr/sbin/zoneadm list -vi
  ID NAME       STATUS   PATH               BRAND   IP
   0 global     running  /                  native  shared
   1 test-zone  running  /export/test-zone  native  shared
  
Use the following commands to boot, reboot, and halt the zone:

# zoneadm -z test-zone boot
# zoneadm -z test-zone reboot
# zoneadm -z test-zone halt
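The whole procedure above can be sketched end to end. This is a hypothetical helper, not an official tool: the zone name and zonepath are the example values from this post, the network configuration step is omitted for brevity, and with DRYRUN=1 each command is printed instead of executed so it can be reviewed first.

```shell
#!/bin/sh
# Hypothetical end-to-end sketch of the whole root zone setup above.
# With DRYRUN=1 each command is printed instead of executed.
run() { if [ -n "$DRYRUN" ]; then echo "+ $*"; else "$@"; fi; }

create_whole_root_zone() {
    zone=$1 zpath=$2
    run mkdir -p "$zpath"            # zone root (assumes the slice is already newfs'd and mounted)
    run chmod 700 "$zpath"
    # 'create -b' gives a blank zone with no inherit-pkg-dir entries;
    # network configuration (add net ...) is omitted here for brevity
    run zonecfg -z "$zone" "create -b; set zonepath=$zpath; set autoboot=true; commit"
    run zoneadm -z "$zone" verify    # confirm the host can support the zone
    run zoneadm -z "$zone" install   # copy OS files into the zone root
    run zoneadm -z "$zone" boot
    run zlogin -C "$zone"            # answer the first-boot configuration questions
}
```

For example, DRYRUN=1 create_whole_root_zone test-zone /export/test-zone prints the full command sequence without touching the system.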