Solaris zones allow multiple virtual environments to run on the same physical system. Previously, the only way to partition an environment in a single box was to use an expensive high-end server capable of physical partitioning; such servers are still available from Oracle (Sun).
Zones provide a virtual operating system environment within a single physical instance of the Solaris OS. Applications run in an isolated and secure environment: an application running in one zone cannot monitor or affect an application running in a different zone.
A further important aspect of zones is that a failing application, such as one that would traditionally have leaked all available memory or exhausted all CPU resources, can be limited so that it affects only the zone in which it is running. This is achieved by capping the amount of the system's physical resources that the zone can use.
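For example, on Solaris 10 8/07 and later, zonecfg can cap a zone's CPU and physical memory directly. A minimal sketch for an existing zone configuration (the zone name "myzone" and the values are only illustrative):
# zonecfg -z myzone
zonecfg:myzone> add capped-cpu
zonecfg:myzone:capped-cpu> set ncpus=2
zonecfg:myzone:capped-cpu> end
zonecfg:myzone> add capped-memory
zonecfg:myzone:capped-memory> set physical=2g
zonecfg:myzone:capped-memory> end
zonecfg:myzone> commit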
Another great component of Solaris zones is resource management. It allows you to do the following:
- Allocate specific computing resources, such as CPU time and memory.
- Monitor how resource allocations are being used, and adjust the allocations when required.
- Regulate how much physical memory is used by a project with the resource capping daemon (rcapd), by "capping" the overall amount that can be used (see the example below).
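As a rough sketch of how the capping daemon is used (the project name user.oracle and the 512 MB cap are only examples): set the rcap.max-rss attribute on a project, enable rcapd, and then watch its activity with rcapstat:
# projmod -s -K "rcap.max-rss=536870912" user.oracle
# rcapadm -E
# rcapstat 5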
There are two types of zones in Solaris: the global zone and non-global zones.
Think of the global zone as the server itself: the traditional view of a Solaris system as we all know it, where you can log in as root and have full control of the entire system. The global zone is the default zone and is used for system-wide configuration and control. Every system contains a global zone, and there can be only one global zone on a physical Solaris server.
A non-global zone is created from the global zone and is also managed by it. You can have up to 8192 non-global zones on a single physical system; the only real limitation is the capacity of the server itself. Applications that run in one non-global zone are isolated from applications running in a separate non-global zone, allowing multiple versions of the same application to run on the same physical server.
There are two types of non-global zones:
* Sparse Root zone
A simple zone that uses the default settings and shares most of the operating system with the global zone.
Creating such a zone involves letting the system pick the default settings, which include read-only loopback file system (lofs) mounts that share most of the OS with the global zone.
* Whole Root zone
A zone that resides on its own slice and has its own copy of the operating system. An example contrasting the two types follows.
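As an illustration of the difference (the zone name sparse-zone is hypothetical): a plain create produces a sparse root zone whose default inherit-pkg-dir resources, typically /lib, /platform, /sbin and /usr, are lofs-mounted read-only from the global zone, while create -b starts with no inherited directories at all:
# zonecfg -z sparse-zone
sparse-zone: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:sparse-zone> create
zonecfg:sparse-zone> info inherit-pkg-dir
inherit-pkg-dir:
        dir: /lib
inherit-pkg-dir:
        dir: /platform
inherit-pkg-dir:
        dir: /sbin
inherit-pkg-dir:
        dir: /usr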
The following steps create a whole root non-global zone on a separate slice with its own copy of the OS files.
1. Create the file system and mount it
# newfs /dev/rdsk/c0t1d0s4
newfs: construct a new file system /dev/rdsk/c0t1d0s4: (y/n)? y
/dev/rdsk/c0t1d0s4: 16567488 sectors in 16436 cylinders of 16 tracks, 63 sectors
8089.6MB in 187 cyl groups (88 c/g, 43.31MB/g, 5504 i/g)
Super-block backups (for fsck -F ufs -o b=#) at:
32, 88800, 177568, 266336, 355104, 443872, 532640, 621408, 710176, 798944,
15700704, 15789472, 15878240, 15967008, 16055776, 16144544, 16233312,
16322080, 16410848, 16499616,
Edit /etc/vfstab and add the following entry:
/dev/dsk/c0t1d0s4 /dev/rdsk/c0t1d0s4 /export/test-zone ufs 1 yes -
# mkdir /export/test-zone
# mount /export/test-zone
# chmod 700 /export/test-zone
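Before moving on, it is worth confirming that the new file system is actually mounted (the sizes shown will depend on your disk layout):
# df -h /export/test-zone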
2. Configure the zone not to use any inherit-pkg-dir resources
# zonecfg -z test-zone
test-zone: No such zone configured
Use 'create' to begin configuring a new zone.
The -b option creates a blank zone; by comparison, a sparse zone includes lofs file systems from the global zone root.
zonecfg:test-zone> create -b
zonecfg:test-zone> info
zonename: test-zone
zonepath:
brand: native
autoboot: false
bootargs:
pool:
limitpriv:
scheduling-class:
ip-type: shared
zonecfg:test-zone> set autoboot=true
zonecfg:test-zone> set zonepath=/export/test-zone
zonecfg:test-zone> add net
zonecfg:test-zone:net> set address=192.168.3.171
zonecfg:test-zone:net> set physical=fjgi0
zonecfg:test-zone:net> set defrouter=192.168.3.1
zonecfg:test-zone:net> end
zonecfg:test-zone> verify
zonecfg:test-zone> commit
zonecfg:test-zone> exit
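To re-check the stored settings later without re-entering the interactive session, zonecfg can print them non-interactively; export even emits the configuration as a command file that could recreate the zone:
# zonecfg -z test-zone info
# zonecfg -z test-zone export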
All the configuration settings for the zone are stored in an XML file under /etc/zones (the zone is also recorded in the /etc/zones/index file).
# cat /etc/zones/test-zone.xml (Do not edit this file directly; use zonecfg to change the parameters of a particular zone.)
3. Ensure that only root has access to the /export/test-zone directory (the chmod 700 in step 1); this directory will be the root file system for test-zone.
4. Check the status and verify the zone:
# /usr/sbin/zoneadm list -vc
ID NAME STATUS PATH BRAND IP
0 global running / native shared
- test-zone configured /export/test-zone native shared
# zoneadm -z test-zone verify
If an error message is displayed and the zone fails to verify, make the corrections specified in the message and try the command again.
5. Install the newly created zone:
# zoneadm -z test-zone install
Preparing to install zone <test-zone>.
Creating list of files to copy from the global zone.
Copying <118457> files to the zone.
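After the install completes, the zone's status changes from configured to installed, which you can confirm from the global zone (output illustrative):
# zoneadm list -vc
ID NAME STATUS PATH BRAND IP
0 global running / native shared
- test-zone installed /export/test-zone native shared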
6. Complete the basic configuration
When a zone is booted for the first time, it takes you through the normal system identification questions, just as if you had booted from a new installation. Boot the zone, then use zlogin -C <zonename> to log in to the new zone at its console:
# zoneadm -z test-zone boot
# zlogin -C test-zone
- Language
- Type of terminal being used
- Host name (e.g. test-zone)
- Security policy (Kerberos or standard UNIX)
- Naming service type (None is a valid response)
- Naming service domain
- Name server
- Default time zone
Note: The host name of each non-global zone should be unique and must be different from the global zone's host name.
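If you prefer to answer these questions ahead of time, you can stage a sysidcfg file in the zone's root before the first boot. A minimal sketch, with all values illustrative (the exact set of keywords accepted varies with the Solaris release, and root_password must be a pre-encrypted hash):
# cat > /export/test-zone/root/etc/sysidcfg <<EOF
system_locale=C
terminal=vt100
name_service=NONE
security_policy=NONE
timezone=US/Eastern
root_password=<encrypted-password-hash>
network_interface=PRIMARY {hostname=test-zone}
EOF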
On the global zone, use zoneadm list -vi to show the current status of the new zone:
# /usr/sbin/zoneadm list -vi
ID NAME STATUS PATH BRAND IP
0 global running / native shared
1 test-zone running /export/test-zone native shared
Use the following commands to boot, reboot, and halt the zone:
# zoneadm -z test-zone boot
# zoneadm -z test-zone reboot
# zoneadm -z test-zone halt
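Once the zone is up and configured, day-to-day access does not need the console; zlogin without -C opens a root shell directly inside the running zone:
# zlogin test-zone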