Solaris 10 Container Guide - Functionality status up to Solaris 10 10/09 and OpenSolaris 2009.06 - Detlef Drewanz, Ulrich Gräf, et al.
Version 3.1-en, Effective: 30/11/2009

Table of contents (excerpt)
- Disclaimer
- Revision control
- 1. Introduction
- 4.1.5.1. Software installation by the global zone – usage in all zones
- 4.1.5.2. Software installation by the global zone – usage in a local zone
- 4.1.5.3. Software installation by the global zone – usage in the global zone
- 4.5. Management and monitoring
- 4.5.1. Using boot arguments in zones
- 4.5.2. Consolidating log information of zones
- 4.5.3. Monitoring zone workload
- 5.2. Network
- 5.2.1. Change network configuration for shared IP instances
- 5.2.2. Set default router for shared IP instance
Disclaimer
Sun Microsystems GmbH does not offer any guarantee regarding the completeness and accuracy of the information and examples contained in this document.

Revision control
Version 3.1, 30/11/2009: Adjustment with content of „Solaris Container Leitfaden 3.
1. Introduction [dd/ug]
This guide is about Solaris Containers: how they work and how to use them. Although the original guide was developed in German [25], starting with version 3.1 we also deliver an English version. With the release of Solaris 10 on 31 January 2005, Sun Microsystems made available an operating system with groundbreaking innovations.
2. Functionality

2.1. Solaris Containers and Solaris Zones

2.1.1. Overview [ug]
Solaris Zones is the term for a virtualized execution environment – a virtualization at the operating system level (in contrast to hardware virtualization). Solaris Containers are Solaris Zones with resource management. The term is frequently used (in this document as well) as a synonym for Solaris Zones.
Thus, a local zone is a Solaris environment that is separated from other zones and can be used independently. At the same time, many hardware and operating system resources are shared with other local zones, which causes little additional runtime expenditure. Local zones execute the same Solaris version as the global zone.
2.1.2. Zones and software installation [dd]
The respective requirements on local zones determine the manner in which software is installed in zones. There are two ways of supplying software in zones:
1. Software is usually supplied in pkg format. If this software is installed in the global zone with pkgadd, it will be automatically available to all other local zones as well.
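As a sketch of this mechanism (the package name and path are example assumptions, not from the guide), pkgadd in the global zone propagates a package to all local zones, while the -G option restricts it to the current zone:

```shell
# Install a package in the global zone; by default it is also
# installed into all existing local zones (example package name).
global# pkgadd -d /tmp/packages EXAMPLEpkg

# Restrict the installation to the global zone only:
global# pkgadd -G -d /tmp/packages EXAMPLEpkg
```

These are Solaris-only administrative commands, shown here as a configuration sketch rather than a runnable script.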
2.1.5. Zones and resource management [ug]
In Solaris 9, resource management was introduced on the basis of projects, tasks and resource pools. In Solaris 10, resource management can be applied to zones as well.
2.1.5.2. Memory resource management [ug]
In Solaris 10 (and in an update of Solaris 9 as well), main memory consumption can be limited at the level of zones, projects and processes. This is implemented with the so-called resource capping daemon (rcapd). A limit for physical memory consumption is defined for the respective objects.
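A minimal sketch of capping a zone's physical memory (zone name and size are example assumptions; the capped-memory resource requires Solaris 10 8/07 or later):

```shell
# Cap physical memory of zone "testzone" at 512 MB (example values).
global# zonecfg -z testzone
zonecfg:testzone> add capped-memory
zonecfg:testzone:capped-memory> set physical=512m
zonecfg:testzone:capped-memory> end
zonecfg:testzone> exit

# Enable the resource capping daemon in the global zone:
global# rcapadm -E
```

This is a configuration sketch for Solaris systems, not a portable script.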
2.1.7. Zones and high availability [tf/du/hs]
Even in the presence of all RAS capabilities, a zone has only the availability of one computer, and this decreases with the number of components of the machine (MTBF). If this availability is not sufficient, so-called failover zones can be implemented using the HA Solaris Container Agent, allowing zones to be moved between cluster nodes (from Sun Cluster 3.1 08/05).
2.1.9. Solaris container cluster (aka "zone cluster") [hs]
In autumn 2008, within the scope of the Open HA Cluster project, zone clusters were announced. They have also been available since Sun Cluster 3.2 1/09 in a commercial product as Solaris Container Cluster. A Solaris Container Cluster is the further development of the Solaris zone technology into a virtual cluster, also called a "zone cluster".
2.2. Virtualization technologies compared [ug]
Conventional data center technologies include:
- Applications on separate computers. This also includes multi-tier architectures with firewall, load balancing, web and application servers and databases.
- Applications on a network of computers. This includes distributed applications and job systems.
2.2.1. Domains/physical partitions [ug]
A computer can be partitioned by configuration into sub-computers (domains, partitions). Domains are almost completely physically separated since electrical connections are turned off. Shared parts are either very failsafe (cabinet) or redundantly structured (service processor, power supplies).
2.2.2. Logical partitions [ug]
A minimal operating system called the hypervisor, which virtualizes the interface between the hardware and the OS of a computer, runs on the computer's hardware. A separate operating system (guest operating system) can be installed on the resulting so-called virtual machines. In some implementations, the hypervisor runs as a normal application program; this involves increased overhead.
2.2.3. Containers (Solaris zones) in an OS [ug]
Within one operating system installation, execution environments for applications and services are created that are independent of each other. The kernel becomes multitenant-enabled: it exists only once but appears in each zone as though it were assigned exclusively. Separation is implemented by restricting access to resources.
2.2.4. Consolidation in one computer [ug]
The applications are installed on one computer and run under different user IDs. This is the type of consolidation feasible with modern operating systems.
Advantages:
- Application: all applications are executable as long as they are executable in the basic operating system and do not use their own OS drivers.
2.2.5. Summary of virtualization technologies [ug]
The virtualization technologies discussed above can be summarized in the following table, compared to installation on a separate computer.
3. Use Cases
The following chapter discusses a variety of use cases for Solaris Containers and evaluates them.

3.1. Grid computing with isolation
Requirement [ug]
There is a need within a company to use free cycles on existing computers to perform a certain amount of computing work in the background. Grid software such as Sun GridEngine is suitable for this.
3.2. Small web servers
Requirement [ug]
One of the following situations exists:
- An Internet Service Provider (ISP) would like to have the option to set up web servers automatically, without additional costs. Based on this technology, the ISP wants to create an attractive offer for web servers with root access.
3.3. Multi-network consolidation
Requirement [dd]
A company uses several different networks that are separated either by firewalls or by routers. Applications are run in the individual networks. The company would like to run the applications from different networks or security areas together on one physical system, as a single application does not require the capacity of a whole system.
3.4. Multi-network monitoring
Requirement [dd]
A company has several different networks that are separated into several levels either by firewalls or by routers. A variety of computers are installed in the individual networks.
3.5. Multi-network backup
Requirement [dd]
A company has several different networks that are separated at different levels either by firewalls or by routers. Different computers are installed in the individual networks. The backup is to be simplified by allowing direct system backups in the individual networks to be performed from one location, without having to connect the networks by routing.
3.6. Consolidation development/test/integration/production
Requirement [ug]
Usually, further systems supporting the same application exist while an application is in production:
- Development systems
- Test systems
- Integration systems, with simulation of the application environment where applicable
- Disaster recovery systems

Solution [ug]
The different systems can be implemented on a computer in zones.
3.7. Consolidation of test systems
Requirement [ug]
To test software and applications, there are many test systems in the data center environment that are only ever used for tests. They are mostly used only for qualitative tests, where systems are not stressed as with performance tests. However, switching between test environments by reinstallation or restore of a shared test computer is out of the question.
3.8. Training systems
Requirements [ug]
In training departments, computers that are provided for training participants (including pupils/students) must frequently be reset.
Solution [ug]
The training systems are implemented by automatically installed zones:
- Sparse-root zones, that is, the zones inherit everything possible from the global zone.
- Automatic zone creation per script.
3.9. Server consolidation
Requirement [ug]
In a data center, several applications run whose workload is too low (often much less than 50%). The computers themselves usually require a lot of electricity, cooling and space.
Solution [ug]
Several applications are consolidated in zones on one computer. The details:
- This can usually be done with sparse-root zones.
- Install exactly one application per zone.
3.10. Confidentiality of data and processes
Requirement [ug]
In the data center, applications are running on different computers because:
- Certain departments want to be certain that data and processes are not seen by other departments.
- Services for different customers are to be consolidated.
3.11. Test systems for developers
Requirement [ug]
Developers need test systems to test their applications. Frequently, the interaction of several computers must be tested as well. Resources for the installation of test systems are usually limited. Since the developers spend most of their time on development, test systems have a low workload.
3.12. Solaris 8 and Solaris 9 containers for development
Requirement [ug]
There are still systems running in companies under Solaris 8 or Solaris 9 with self-developed software, and it must be possible to continue software development or debugging for these systems.
3.13. Solaris 8 and Solaris 9 containers as revision systems
Requirement [ug]
For legal reasons or due to audit requirements, it is necessary to keep certain systems available for years under Solaris 8 or Solaris 9.
- Operating old systems under Solaris 8 or Solaris 9 is expensive (maintenance costs).
- Old hardware may no longer be covered by a maintenance contract.
- Migration to more up-to-date systems is expensive.
3.14. Hosting for several companies on one computer
Requirement [ug]
An application service provider operates systems for a variety of companies. The systems are underutilized.
Solution [ug]
The applications are consolidated into zones on one computer. The details:
- Separate file systems for each zone.
- Option: server integration into the company networks via NAT.
- Option: software installation per mount.
3.15. SAP portals in Solaris containers
Requirement [da]
The operation of SAP system environments is becoming more complex.
3.16. Upgrade and patch management in a virtual environment
Requirement [da]
Virtualization by means of Solaris Containers allows an application to be decoupled from the hardware. An application can thus be run very simply on different servers. Data center operations must ensure the availability of these applications by means of suitable measures.
3.18. Solaris Container Cluster (aka "zone cluster")
Requirement [hs]
- In a virtualized environment based on Solaris containers, the administrator of a zone should also be able to administer the cluster part of the application in the zone.
4. Best Practices
The following chapter describes concepts for the implementation of architectures with Solaris containers.

4.1. Concepts
4.1.3. Comparison between sparse-root zones and whole-root zones [dd]
From the considerations listed above, a comparison can be drawn between sparse-root zones and whole-root zones. In the vast majority of cases, sparse-root zones are used.
4.1.5. Software installations in Solaris and zones [dd]
The zones' directory structure is determined mainly by the need to install software with special requirements in this area. Prior to creating the zones, this question should be discussed thoroughly with regard to the intended purpose of the zone.
4.1.5.3. Software installation by the global zone – usage in the global zone
- non-pkg software
  - Software A is installed by the global zone, e.g. in /software/A.
  - /software/A is available to the global zone as a writable directory.
  - Configuration and data of software A can be modified by the global zone, since this part is available and writable exclusively by this zone.
4.1.6. Storage concepts
4.1.6.1. Storage for the root file system of the local zones [ug]
It is usually sufficient for several zones to share a file system. If the influence of the zones on the file system is to be decoupled, it is recommended to give each zone its own file system. With the ZFS file system introduced in Solaris 10, this can be achieved with little effort.
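A sketch of giving each zone its own ZFS file system (pool name, device and mount point are example assumptions):

```shell
# Create one ZFS file system per zone root so zones do not
# influence each other's file system (names are example values).
global# zpool create zonepool c1t1d0
global# zfs create -o mountpoint=/zones/zone1 zonepool/zone1
global# chmod 700 /zones/zone1   # the zonepath must not be world-readable
```

Shown as a Solaris configuration sketch; exact device names depend on the system.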
4.1.6.4. Root disk layout [dd]
Depending on availability requirements, root disks within a system are mirrored via internal disks or made available through a variety of controllers and external storage devices. Even nowadays, the entire OS installation is often placed in /, that is, without any further partitioning into /usr or /opt.
4.1.6.6. Options for using ZFS in local zones [hes]
Depending on the manner in which ZFS is configured in zones, there are different application options for ZFS in zones.
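One of these options, delegating a ZFS dataset to a zone so it can be administered from inside the zone, can be sketched as follows (dataset and zone names are example assumptions):

```shell
# Delegate the dataset zonepool/data to the zone; inside the zone
# the administrator can then create sub-filesystems and snapshots.
global# zonecfg -z zone1
zonecfg:zone1> add dataset
zonecfg:zone1:dataset> set name=zonepool/data
zonecfg:zone1:dataset> end
zonecfg:zone1> exit
```

A configuration sketch for Solaris zonecfg, not a portable script.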
4.1.7. Network concepts
4.1.7.1. Introduction into networks and zones [dd]
A network address is not mandatory when configuring a zone. However, services within a zone can be reached from the outside only through the network. Therefore, at least one network address per zone is typically required (configurable with zonecfg).
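Configuring such an address for a shared-IP zone can be sketched as follows (interface name and address are example assumptions):

```shell
# Add a network address to a shared-IP zone (interface and
# address are example values).
global# zonecfg -z zone1
zonecfg:zone1> add net
zonecfg:zone1:net> set physical=bge0
zonecfg:zone1:net> set address=192.168.1.10/24
zonecfg:zone1:net> end
zonecfg:zone1> exit
```

A Solaris configuration sketch; the interface must exist in the global zone.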
4.1.7.4. Exclusive IP instance [dd]
With exclusive IP instances, an almost complete separation of the network stacks between zones is achieved (from Solaris 10 8/07). A zone with an exclusive IP instance has its own copy of the variables and tables that are used by the TCP/IP stack. As a result, this zone has its own IP routing table, ARP table, IPsec policies, IP filter rules and ndd settings.
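A sketch of configuring an exclusive IP instance (requires Solaris 10 8/07 or later; zone and interface names are example assumptions):

```shell
# The zone gets its own IP stack and exclusive use of the interface.
global# zonecfg -z zone2
zonecfg:zone2> set ip-type=exclusive
zonecfg:zone2> add net
zonecfg:zone2:net> set physical=bge1   # no address here; it is configured inside the zone
zonecfg:zone2:net> end
zonecfg:zone2> exit
```

With exclusive IP, the address, routing and ndd settings are administered from within the zone itself.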
4.1.7.6. Zones and limitations in the network [dd]
Zones have different limitations related to network configurations. The following table shows the differences separated by zone type and IP instance type. Some functionalities can be configured and used within the zones themselves (+), some affect only the functionality of zones or can be used by services within a zone, and some cannot be used at all in local zones (-).
4.1.8. Additional devices in zones
4.1.8.1. Configuration of devices [ug]
In principle, a local zone uses no physical devices. To use network interfaces exclusively in one zone, the zone has to be configured as an exclusive IP zone (4.1.7.4 Exclusive IP instance). Disks or volumes can be inserted into a zone with zonecfg (5.1.12.6 Using a DVD drive in the local zone).
4.1.9. Separate name services in zones [ug]
Name services include, among other things, the hosts database and the user IDs (passwd, shadow), and are configured with the file /etc/nsswitch.conf, which exists separately in each local zone. Name services are therefore defined in local zones independently of the global zone. The most important aspects are covered in this section.
4.2. Paradigms
Paradigms are design rules for the construction of zones. Depending on the application, a decision must be made as to which of them should be applied.

4.2.1. Delegation of admin privileges to the application department [ug]
Administration of an application can be delegated to the department responsible for the application.
4.2.3. One application per zone [ug]
Another paradigm is to always install one application per zone. A zone has very little overhead; basically, only the application processes are separated from the rest of the system by tagging them with the zone ID. This is implemented by the zone technology.
administrator. With the software products described here, the requirements with respect to visualization and flexibilization of containers, right up to disaster recovery concepts, can be covered completely.
4.2.5. Solaris Container Cluster [hs]
One of the essential properties of containers is the possibility to delegate administrative tasks to the administrator or the user of one or more containers. If high availability of a container is ensured by the use of the HA Solaris Container agent, this is not visible to the administrator of the container concerned, which is usually the desired situation.
4.3. Configuration and administration
4.3.1. Manual configuration of zones with zonecfg [ug]
The command zonecfg is used to configure a zone; see the example in the Cookbook. The configuration merely describes the directory (zonecfg: zonepath) in which the files for the zone are to be placed, and how system directories of a local zone are created from the directory contents of the global zone (copy/loopback mount).
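A minimal configuration session might look like this (zone name and paths are example assumptions; `create` uses the default sparse-root template):

```shell
# Create and configure a zone with default (sparse-root) settings.
global# zonecfg -z zone1
zonecfg:zone1> create              # default template: sparse root
zonecfg:zone1> set zonepath=/zones/zone1
zonecfg:zone1> set autoboot=true   # start the zone at system boot
zonecfg:zone1> verify
zonecfg:zone1> commit
zonecfg:zone1> exit
```

A Solaris configuration sketch; further resources (net, fs, dataset) are added as shown in the respective sections.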
As a general rule, some guidelines are specified locally, for example:
- Which file systems are to be inherited from the global zone (inherit-pkg-dir).
- File systems with software that is to be usable by each zone.
- Shared directories from the global zone.
- Whether the zone is to be started automatically when the computer is rebooted.
- The resource pool and other resource settings for the zone.
4.4. Lifecycle management
4.4.1. Patching a system with local zones [dd/ug]
In a Solaris system with native zones, the local zones always have the same patch status as the global zone. This is independent of whether the local zone is a sparse-root zone or a whole-root zone. Therefore, patches on this type of system are installed in the global zone and thereby automatically in all local zones.
4.4.3. Patching with an upgrade server [ug]
A zone is transported from the production computer to a so-called upgrade server (zoneadm detach and zoneadm attach) that has the same version as the production server. On this upgrade server, the upgrade or the installation of patches is then carried out. Subsequently, the zone will have the new patch version.
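The detach/attach cycle can be sketched as follows (host names, zone name and paths are example assumptions):

```shell
# On the production server: detach the zone and transfer its zonepath.
prod# zoneadm -z zone1 detach
prod# cd /zones; tar cf - zone1 | ssh upgrade-server "cd /zones; tar xf -"

# On the upgrade server: re-create the configuration from the
# detached zone, then attach it for patching/upgrade.
upgrade# zonecfg -z zone1 create -a /zones/zone1
upgrade# zoneadm -z zone1 attach
```

A configuration sketch; the transfer method (tar over ssh here) is an assumption, any file transfer works.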
4.4.6. Re-installation and service provisioning instead of patching [dd]
Patching of zones can force zones into single-user mode when system patches are applied. Zone patching can therefore interrupt the services provided. This service interruption can vary in length depending on the type of patches, the number of zones and the number of patches to be installed.
4.4.8. Backup of zones with ZFS [ug]
Starting with Solaris 10 10/08, zones on ZFS are officially supported. This considerably simplifies the backup of zones. Zones are installed on ZFS, and a zone backup can be implemented via a snapshot of the ZFS file system.
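A sketch of a snapshot-based backup of a zone installed on ZFS (dataset names and paths are example assumptions):

```shell
# Create a snapshot of the zone's ZFS file system and stream it
# to a backup file (dataset and paths are example values).
global# zfs snapshot zonepool/zone1@backup1
global# zfs send zonepool/zone1@backup1 > /backup/zone1.zfs

# Restore later into a new dataset:
global# zfs receive zonepool/zone1-restore < /backup/zone1.zfs
```

A Solaris configuration sketch; in practice the stream would typically go to backup media rather than a local file.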
4.5.2. Consolidating log information of zones [dd]
The use of zones as a runtime environment for services leads to an increase in the number of operating system environments that are part of an architecture. Although this is intended, for reasons of encapsulation and modularization of applications, it can lead to another problem.
4.5.6. DTrace of processes within a zone [dd/ug]
DTrace can be used to examine processes in zones. To do so, DTrace scripts can be extended by the variable zonename, in order to e.g. trace only the system calls of a zone:
global# dtrace -n 'syscall:::/zonename=="sparse"/ {@[probefunc]=count()}'
or to measure the I/O of zones.
4.6. Resource management
4.6.1. Types of resource management [dd]
There are three different types of resource management in all:
- Fair resources: all resources are distributed fairly among all requesters according to the defined rules. Example: fair share scheduler for CPU resources.
- Capped resources: resources are visible to all requesters; the specified capping value is the upper limit of the resource.
4.6.2.3. Fair share scheduler (FSS) [ug]
When multiple zones are running in one resource pool, the distribution of CPU time among these zones is configurable. This configuration becomes active when the workload of the processor set approaches 100% (full load). The process priorities will then be modified by the FSS to achieve the configured CPU shares.
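Assigning CPU shares to zones for the FSS can be sketched as follows (zone names and share values are example assumptions):

```shell
# Make FSS the default scheduler, then give zone1 twice the CPU
# weight of zone2 under full load (values are examples).
global# dispadmin -d FSS
global# zonecfg -z zone1
zonecfg:zone1> set cpu-shares=20
zonecfg:zone1> exit
global# zonecfg -z zone2
zonecfg:zone2> set cpu-shares=10
zonecfg:zone2> exit
```

A Solaris configuration sketch; the shares are relative weights, not percentages.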
4.6.3. Limiting memory resources [ug]
Memory usage by zones is calculated almost exactly (since Solaris 10 8/07). This is done in the following way: first, the set of all memory segments of the processes in the zone is determined; shared segments appear only once in this set. Next, the memory usage of the segments is added up. This calculation shows the memory requirements of the zone almost precisely.
4.6.3.4. Limiting locked memory [ug]
Real-time programs and databases can lock virtual memory pages into main memory. To do so, the programs require the privilege proc_lock_memory, which must be configured for the zone. Databases partly use memory locking for shared segments to optimize performance (ISM – intimate shared memory). Nowadays, however, DISM (Dynamic ISM, e.g.
4.7. Solaris container navigator [dd]
The following segment navigates through the considerations required prior to the deployment of Solaris containers. These include the examination of applications for compatibility with Solaris containers, the selection of various configuration options, and aspects of container installation.
A-3: Self-qualification of an application in a container
A-3-1: If necessary, note additional details in: "Qualification Best Practices for Application Support in Non-Global Zones", http://developers.sun.com/solaris/articles/zone_app_qualif.
5. Cookbooks
The Cookbooks chapter demonstrates the implementation of the conceptual Best Practices with concrete examples.

5.1. Installation and configuration

5.1.1. Configuration files [dd]
File: /etc/zones/index
Lists all configured zones of a system, incl. zone status and root directory.
# DO NOT EDIT: this file is automatically generated by zoneadm(1M)
# and zonecfg(1M). Any manual changes will be lost.
5.1.3. Root disk layout [dd]
The following table gives an example of a root disk layout for a system with a local zone. The assumed file system sizes are empirical values and examples, and can vary depending on the scope of the software installation and local requirements.
5.1.4. Configuring a sparse root zone: required actions [dd]
To change a sparse root zone into a whole root zone, it is necessary to re-install the zone after changing the configuration. Therefore, before starting the configuration, the type of zone to be created must be selected carefully. Decision aids have already been discussed in this document.
5.1.5. Configuring a whole root zone: required actions [dd]
Whole root zones do not contain inherit-pkg-dir and are generated with zonecfg create from the default file /etc/zones/SUNWdefault.xml. Subsequently, all inherit-pkg-dir entries are removed with remove inherit-pkg-dir. We advise against using zonecfg create -b because such a zone is created with the defaults from /etc/zones/SUNWblank.
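The steps above can be sketched as follows (zone name and zonepath are example assumptions; the four directories are the default inherit-pkg-dir entries of the sparse-root template):

```shell
# Create a whole-root zone: start from the default template, then
# remove all inherit-pkg-dir resources so every package is copied.
global# zonecfg -z wholezone
zonecfg:wholezone> create
zonecfg:wholezone> set zonepath=/zones/wholezone
zonecfg:wholezone> remove inherit-pkg-dir dir=/lib
zonecfg:wholezone> remove inherit-pkg-dir dir=/platform
zonecfg:wholezone> remove inherit-pkg-dir dir=/sbin
zonecfg:wholezone> remove inherit-pkg-dir dir=/usr
zonecfg:wholezone> commit
zonecfg:wholezone> exit
```

A Solaris configuration sketch; the resulting zone is installed with a full copy of all packages.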
5.1.6. Zone installation [dd]
Before a zone is used for the first time, it must be installed according to its configuration. The installation time varies depending on whether a sparse-root zone or a whole-root zone is installed.
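Installation and first boot can be sketched as follows (zone name is an example assumption):

```shell
# Install the configured zone, then boot it and attach to its
# console to answer the initial system identification questions.
global# zoneadm -z zone1 install
global# zoneadm -z zone1 boot
global# zlogin -C zone1
```

A Solaris command sketch; `zlogin -C` connects to the zone console for the first-boot dialog.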
5.1.8. Uninstalling a zone [dd]
Installed zones are uninstalled with zoneadm -z <zone> uninstall. In doing so, the following actions are carried out:
- Deletion of the data in the zonepath subdirectory
- Conversion of the zone status in /etc/zones/index to configured

5.1.9. Configuration and installation of a Linux branded zone with CentOS [dd]
Linux branded zones can only be set up on x86/x64 systems.
5.1.10. Configuration and installation of a Solaris 8/Solaris 9 container [ug]
Solaris 8 containers and Solaris 9 containers can be created in four simple steps:
1. Plan how the data areas and network interfaces of the sources under Solaris 8 (or Solaris 9) can be represented in the Solaris 10 system.
2. Archive the Solaris 8 system.
5.1.12. Storage within a zone [dd]
Storage can be used in different ways in local zones.
5.1.12.3. The global zone mounts a file system when the local zone is booted

[dd] File systems can be provided to a local zone by the global zone not only as loopback file systems. The following example shows how to mount a file system directly as UFS when the zone is booted. The file system is mounted by the global zone in /zones//root.
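A sketch of the corresponding zonecfg fs resource (device names and the mount point are assumptions):

```shell
global# zonecfg -z zone1
zonecfg:zone1> add fs
zonecfg:zone1:fs> set dir=/data                 # mount point seen inside the zone
zonecfg:zone1:fs> set special=/dev/dsk/c1t0d0s4
zonecfg:zone1:fs> set raw=/dev/rdsk/c1t0d0s4    # raw device used for fsck
zonecfg:zone1:fs> set type=ufs
zonecfg:zone1:fs> end
zonecfg:zone1> commit
```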
5.1.12.5. User level NFS server in a local zone

[ug] The native NFS in the Solaris kernel currently cannot be used as a server within a local zone. Instead, unfs3d – an open-source NFS server (see the unfs3 project at SourceForge) – can be used; it runs completely in userland and is therefore not subject to this limitation.
For dynamic configuration, the device's major and minor numbers must be determined. This information can be obtained with the ls command in the global zone.
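A sketch (the device path is an example): in the long listing of a block or character device, the size column is replaced by the comma-separated major and minor numbers.

```shell
global# ls -lL /dev/dsk/c1t0d0s4
# the two numbers before the date, e.g. "32, 4", are the major and minor numbers
```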
5.1.12.8. Several zones share a file system

[dd] The zone model makes it very easy for several zones to share a writable file system. This is possible if the global zone mounts a file system and makes this same file system available to several zones as a read/write loopback file system (Cookbook 5.1.12.2 The global zone supplies a file system per lofs to the local zone).
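A sketch of the shared read/write lofs setup (paths and zone names are assumptions):

```shell
global# mount /dev/dsk/c1t0d0s4 /export/shared   # mounted once in the global zone
global# zonecfg -z zone1
zonecfg:zone1> add fs
zonecfg:zone1:fs> set dir=/shared
zonecfg:zone1:fs> set special=/export/shared
zonecfg:zone1:fs> set type=lofs
zonecfg:zone1:fs> end
zonecfg:zone1> commit
# repeat the "add fs" step for zone2, zone3, ...
```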
5.1.13. Configuring a zone by command file or template

[dd] Zones can be configured by using command files for zonecfg or by the use of templates. This allows many zones to be configured quickly and automatically while avoiding errors.
• Using the zonecfg command file
A command file is set up for automatic configuration of the zone. 1.
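A sketch of such a command file and its use (zone name, zonepath, interface and address are assumptions):

```shell
global# cat zone1.cfg
create
set zonepath=/zones/zone1
add net
set physical=bge0
set address=192.168.1.10/24
end
commit
exit
global# zonecfg -z zone1 -f zone1.cfg
```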
5.1.15. Accelerated automatic creation of zones on a ZFS file system

[bf/ug] If a zone is configured on a ZFS file system, it can be duplicated very quickly by using ZFS snapshots. This procedure is described below by means of an example script. The script is available for download at http://blogs.sun.com/blogfinger/entry/how_to_create_a_lot.
5.2. Network

5.2.1. Change network configuration for shared IP instances

[dd] For an already configured zone with a shared IP instance, it may be necessary to change the physical interface or the network address.

global# zonecfg -z zone1
zonecfg:zone1> info net
net:
        address: 192.168.1.1/24
        physical: bge1
zonecfg:zone1> select net physical=bge1
zonecfg:zone1:net> set physical=bge0
zonecfg:zone1:net> set address=192.168.2.
5.2.4. Change network configuration from shared IP instance to exclusive IP instance

[dd] Up to Solaris 10 11/06, already configured zones run with shared IP instances. With the introduction of Solaris 10 8/07, it is possible to run zones with their own IP stack.
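A sketch of the reconfiguration (zone and interface names are assumptions; with an exclusive IP instance, the net resource carries no address, since addressing is done inside the zone):

```shell
global# zonecfg -z zone1
zonecfg:zone1> remove net               # drop the old shared-IP net resource
zonecfg:zone1> set ip-type=exclusive
zonecfg:zone1> add net
zonecfg:zone1:net> set physical=bge1    # the interface is given exclusively to the zone
zonecfg:zone1:net> end
zonecfg:zone1> commit
```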
5.2.6. IP filter between exclusive IP zones on a system

[dd] The usual configuration rules for IP filters must be followed for the use of IP filters in exclusive IP zones. This is possible because, with exclusive IP instances, the physical network port is assigned to the zone. After configuring the IP filter per zone, IP filter is activated in each zone and works independently in each IP instance.
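A hedged sketch of per-zone activation (the rule set is a placeholder example):

```shell
zone1# cat /etc/ipf/ipf.conf
block in all
pass in quick proto tcp from any to any port = 22 keep state
zone1# svcadm enable svc:/network/ipfilter:default
```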
5.2.7.2. Zones in separate network segments using the shared IP instance

[dd/ug] Two local zones, zone1 and zone2, are located in separate network segments and provide services for these network segments.
• Each local zone should have its own physical interface in the network segment.
• No other network is connected to the network segment.
• Routing is not used.
• There should be no communication between local zones.
5.2.7.3. Zones in separate network segments using exclusive IP instances

[dd/ug] Two local zones, zone1 and zone2, are located in separate network segments and provide services for these network segments.
• Each local zone should have its own physical interface.
• No additional network is connected to the network segment.
• Routing is not used.
• There should be no communication between the local zones.
5.2.7.4. Zones in separate networks using the shared IP instance

[dd/ug] Two local zones, zone1 and zone2, are located in separate networks and provide services for other networks.
− Each local zone should have its own physical interface in the network.
− Additional networks are connected to the network segment.
− Routing is used.
− There should be no communication between the local zones.
5.2.7.5. Zones in separate networks using exclusive IP instances

[dd] Two local zones, zone1 and zone2, are located in separate networks and provide services for other networks.
− Each local zone should have its own physical interface in the network.
− Additional networks are connected to the network segment.
− Routing is used.
− There should be no communication between local zones.
5.2.7.6. Zones connected to independent customer networks using the shared IP instance

[dd/ug] Two local zones, zone1 and zone2, are located in separate networks and provide services for a variety of customers in their own networks.
• Each local zone should have its own physical interface in the network.
• Additional customer networks are connected to the network segment.
[Figure: Customer Network A (192.168.101.0) and Customer Network B (192.168.102.0) behind NAT routers (192.168.101.201 / 192.168.102.201, inner addresses 192.168.201.2 / 192.168.202.2); Zone 1: bge1:1 - 192.168.201.1, default router 192.168.201.2; Zone 2: bge2:2 - 192.168.202.1, default router 192.168.202.2; Global Zone: bge0 - 192.168.1.1, bge1 - 0.0.0.0, bge2 - 0.0.0.0, reject route between 192.168.201.1 and 192.168.202.1]
5.2.7.7. Zones connected to independent customer networks using exclusive IP instances

[dd/ug] Two local zones, zone1 and zone2, are located in separate networks and provide services for a variety of customers in their own networks.
• Each local zone should have its own physical interface.
• Additional customer networks are connected to the network segment.
[Figure: Customer Network A (192.168.101.0) and Customer Network B (192.168.102.0) behind NAT routers (192.168.101.201 / 192.168.102.201, inner addresses 192.168.201.2 / 192.168.202.2); Zone 1 (ip type: exclusive): bge1 - 192.168.201.1, default router 192.168.201.2; Zone 2 (ip type: exclusive): bge2 - 192.168.202.1, default router 192.168.202.2; Global Zone (ip type: shared): bge0 - 192.168.1.1]
• In order to avoid communication between the local zones through the shared TCP/IP stack, reject routes must be set in the global zone that prevent communication between the IP addresses of the two zones (or ipfilter can be used).

route add 192.168.201.1 192.168.202.1 -interface -reject
route add 192.168.202.1 192.168.201.1 -interface -reject
route add 192.168.200.1 192.168.202.1 -interface -reject
route add 192.168.202.
5.2.7.9. Connection of zones through an external load balancing router using exclusive IP instances

[dd/ug] A web server in zone1 is contacted from the internet and needs the application server in zone2 to fulfill the requests.
− Zone1 should be connected to the internet through a separate network.
− The connection from zone1 to zone2 should take place through an external load balancing router.
[Figure: load balancer (192.168.200.2) addressing zone 2 as 192.168.102.1; Zone 1 (ip type: exclusive): bge1 - 192.168.200.1, bge3 - 192.168.201.1, default router 192.168.200.2; Zone 2.1 (ip type: exclusive): bge2 - 192.168.201.2; Zone 2.2 (ip type: exclusive): bge4 - 192.168.201.3; Global Zone (ip type: shared): bge0 - 192.168.1.1]
5.3. Lifecycle management

5.3.1. Booting a zone

[dd] zoneadm -z <zone> boot starts up a zone, mounts the file systems, initializes the network interfaces, sets the resource controls and starts the zone's service manager. When the zone is first started, as with a Solaris re-installation, all smf service manifests are imported and the initial smf repository for the zone is created.
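A sketch (the zone name is an assumption):

```shell
global# zoneadm -z zone1 boot
global# zlogin -C zone1      # attach to the zone console to follow the first boot
```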
Alternatively, set the boot arguments permanently in a zone configuration:

global# zonecfg -z keetonga
zonecfg:keetonga> info bootargs
bootargs:
zonecfg:keetonga> set bootargs="-m verbose"
zonecfg:keetonga> info bootargs
bootargs: -m verbose
zonecfg:keetonga> commit
zonecfg:keetonga> exit
global# zoneadm -z keetonga halt
bash-3.
5.3.4. Software installation with provisioning system

[ug] The N1 SPS software can provision software in zones as well. The requirements are:
• A writable directory where the software can be installed. This can be /opt. /opt must not be configured as inherit-pkg-dir in this case.
5.3.6. Zone migration within a system

[ug] Let us assume that a zone named "test" is to be moved to another directory. Currently, this zone is located at /export/home/zone/test (zonepath).
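This can be done with zoneadm move; a sketch (the target path is an assumption):

```shell
global# zoneadm -z test halt                  # the zone must not be running
global# zoneadm -z test move /container/test  # new zonepath
global# zoneadm -z test boot
```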
5.3.7. Duplicating zones with zoneadm clone

[ug] Zone installation can be accelerated with zoneadm ... clone. In this example, a zone named test is already configured and installed.

global# zoneadm list -vc
  ID NAME     STATUS     PATH              BRAND    IP
   0 global   running    /                 native   shared
   - test     installed  /container/test   native   shared

This zone now serves as the basis for another zone installation.
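A sketch of the clone step (the new zonepath is an assumption): the configuration of test is exported, adapted to a new zonepath, fed to zonecfg for test1, and test1 is then cloned from test.

```shell
global# zonecfg -z test export > test.cfg
global# sed 's|/container/test|/container/test1|' test.cfg | zonecfg -z test1
global# zoneadm -z test1 clone test
global# zoneadm -z test1 boot
```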
Now, zone test1 is configured in exactly the same way as zone test but has its own zonepath.
5.3.8. Duplicating zones with zoneadm detach/attach and zfs clone

[ug] First, the zone "test" is moved to its own ZFS file system. The file system must be accessible by root only; otherwise an error message appears.
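A hedged sketch of the whole sequence (dataset names and mount points are assumptions):

```shell
global# zoneadm -z test halt
global# zoneadm -z test detach
global# zfs snapshot zpool/zones/test@v1
global# zfs clone zpool/zones/test@v1 zpool/zones/test1  # mounted at /zones/test1 (assumption)
global# zonecfg -z test1
zonecfg:test1> create -a /zones/test1      # take over the detached configuration
zonecfg:test1> commit
zonecfg:test1> exit
global# zoneadm -z test1 attach
global# zoneadm -z test attach             # re-attach the original zone
```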
5.3.9. Moving a zone between a sun4u and a sun4v system

[ug] Currently, two architectures with SPARC processors are available from Sun Microsystems, both supported by Solaris 10: the sun4u architecture, used in the larger data center computers with higher single-processor performance, and the sun4v architecture, used in the CMT computers (Chip Multi Threading) with many cores and threads as well as virtualization support.
Next, the zone is to be transported to a sun4v system named bashful. To do so, the contents and the configuration are saved:

root@tiger [23] # cd /zone
root@tiger [23] # tar cEvf u0.tar u0              # save the zone
root@tiger [24] # zonecfg -z u0 export >u0.export # save the configuration

The zone is created by means of the saved configuration:

root@bashful [40] # zonecfg -z u0v
5.3.10. Shutting down a zone

[dd] Zones can be shut down from the local zone itself or from the global zone. Depending on which option is used, running services are either completed in an orderly manner or simply stopped.
• zoneadm -z <zone> halt, called in the global zone, halts a zone and stops all processes in the zone immediately.
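A sketch of both variants (the zone name is an example; the shutdown invocation is one common form):

```shell
global# zlogin zone1 shutdown -y -g0 -i0   # orderly shutdown, services are stopped cleanly
global# zoneadm -z zone1 halt              # immediate halt, processes are simply stopped
```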
The BE is now available, e.g. under /.alt.s10-807+1. Next, the boot archive of this BE is updated and the BE is unmounted again.

bootadm update-archive -R /.alt.s10-807+1
luumount s10-807+1

Finally, the new BE can be activated. To run the new BE, the entire system must be restarted. It is important that Live Upgrade performs a final synchronization when shutting down the current BE.
5.4. Management and monitoring

5.4.1. DTrace in a local zone

[dd] Since Solaris 10 11/06, DTrace can be applied within local zones to the processes of that zone. To enable DTrace, it is necessary to extend the set of privileges for the local zone with dtrace_proc and dtrace_user. Without these privileges, no DTrace probes are available in the zone.
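The privileges are added through the zone's limitpriv property; a sketch (the zone name is an assumption):

```shell
global# zonecfg -z zone1
zonecfg:zone1> set limitpriv=default,dtrace_proc,dtrace_user
zonecfg:zone1> commit
global# zoneadm -z zone1 reboot   # the new privilege set takes effect on the next boot
```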
5.5. Resource management

5.5.1. Limiting the /tmp size within a zone

[dd] In many cases, /tmp is used as a tmpfs in swap. This leads to the swap area being shared by all zones through /tmp in each zone. Although each zone sees only its own /tmp area, global resources are used. To limit the use of space, /tmp should be mounted in the /etc/vfstab of zones with the size option.
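An example /etc/vfstab line inside the zone (the 250 MB limit is an example value):

```shell
# device   device   mount  FS     fsck  mount    mount
# to mount to fsck  point  type   pass  at boot  options
swap       -        /tmp   tmpfs  -     yes      size=250m
```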
5.5.4. Fair share scheduler

[ug] The ratio of CPU usage between zones or projects can be set. This is implemented by the so-called fair share scheduler. CPU shares are allocated as follows:
• For zones, by using add rctl and the attribute zone.cpu-shares in the zonecfg command.
• For projects, this is set in the project database (/etc/project, or NIS or LDAP) with the attribute project.cpu-shares (global and/or local zone).
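A sketch of assigning CPU shares to a zone (zone name and share count are assumptions):

```shell
global# zonecfg -z zone1
zonecfg:zone1> add rctl
zonecfg:zone1:rctl> set name=zone.cpu-shares
zonecfg:zone1:rctl> add value (priv=privileged,limit=10,action=none)
zonecfg:zone1:rctl> end
zonecfg:zone1> commit
```

Since Solaris 10 8/07, the shorthand set cpu-shares=10 achieves the same.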
created with poolcfg and pooladm.

5.5.9. Dynamic resource pools for zones

[dd] As already described in 4.6.2.5 Dynamic resource pools, dynamic resource pools can very easily be used for zones since Solaris 10 8/07. The number of CPUs required is specified in the zone configuration. When the corresponding zone is started, a temporary resource pool is created and assigned to the starting zone.
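A sketch of the zone configuration (zone name and CPU range are assumptions):

```shell
global# zonecfg -z zone1
zonecfg:zone1> add dedicated-cpu
zonecfg:zone1:dedicated-cpu> set ncpus=1-3   # temporary pool grows/shrinks between 1 and 3 CPUs
zonecfg:zone1:dedicated-cpu> end
zonecfg:zone1> commit
```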
5.5.10. Limiting the physical main memory consumption of a project

[dd] To limit the physical main memory of a project, the resource capping daemon rcapd(1M) can be used. If the resident set size (RSS) of a project exceeds the capping target (rcap.max-rss, in bytes), rcapd reduces the RSS and swaps used memory pages out to the paging device. rcapd is configured with rcapadm(1M) and monitored with rcapstat(1).
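A sketch (project name and the 1 GB cap are assumptions):

```shell
global# projmod -s -K 'rcap.max-rss=1073741824' db-proj   # cap the project at 1 GB RSS
global# rcapadm -E                                        # enable the capping daemon
global# rcapstat 5                                        # observe capping every 5 seconds
```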
Settings for swap (= virtual memory), locked memory and other resource controls of a zone can be queried at runtime with prctl -i zone <zone>.

global # prctl -i zone zone1
zone: 22: zone1
NAME                       PRIVILEGE    VALUE
zone.max-swap              privileged   200MB
                           system       16.0EB
zone.max-locked-memory     privileged   20.0MB
                           system       16.0EB
zone.max-shm-memory        system       16.0EB
zone.max-shm-ids           system       16.8M
. . .
zone.max-lwps              system       2.15G
zone.
Supplement

A. Solaris Container in OpenSolaris

A.1. OpenSolaris – general

[dd] In 2005, Sun Microsystems started OpenSolaris as an open-source project in order to support and advance the developer community around Solaris (http://www.opensolaris.org/os/). In May 2008, the first OpenSolaris operating system was completed (http://www.opensolaris.com/). The first distribution, OpenSolaris 2008.
A.1. Cookbook: Configuring an ipkg zone

The configuration of the zone is done as usual with zonecfg(1M).

root@cantaloup:~# zonecfg -z keetonga
keetonga: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:keetonga> create
zonecfg:keetonga> set zonepath=/zones/keetonga
zonecfg:keetonga> add net
zonecfg:keetonga:net> set physical=e1000g0
zonecfg:keetonga:net> set address=192.