though not the current installation.
Care should be taken when creating additional VxVM disk groups
(other than rootdg) via the Ignite-UX GUI. During installation, no
validation is done to check whether a chosen disk group name
conflicts with a disk group name already recorded on another,
currently unused disk on the system. If the name conflicts with an
existing disk group, the attempt to create a disk group of the same
name fails; this is a feature of VxVM that prevents the creation of
duplicate disk groups. If you encounter this problem, you are
presented with output similar to the following:
* Starting VxVM
* Creating VxVM Disk "c17t13d0" (1/0/12/0/0/4/1.13.0).
* Creating VxVM Disk "c17t12d0" (1/0/12/0/0/4/1.12.0).
* Creating VxVM Disk "c17t11d0" (1/0/12/0/0/4/1.11.0).
* Creating VxVM Disk "c17t10d0" (1/0/12/0/0/4/1.10.0).
* Adding disk "c17t10d0" to rootdg.
* Enabling VxVM
* Creating disk group "dg01".
* Creating disk group "dg02".
vxvm:vxdg: ERROR: Disk group dg02: cannot create:
        Disk group exists and is imported
ERROR: Command "/sbin/vxdg init dg02 dg0201=c17t12d0 dg0202=c17t13d0"
       failed.
The configuration process has incurred an error, would you like
to push a shell for debugging purposes? (y/[n]):
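One way to avoid the conflict in the first place is to check which
disk group names are already recorded on the system's disks before
choosing new names in the GUI. A rough sketch, run from an already
booted system that can see the same disks (the command path is
assumed to match the /sbin/vxdg path in the error above):

    # List all disks visible to VxVM together with the disk group
    # names recorded on them, including deported (not currently
    # imported) disk groups:
    /sbin/vxdisk -o alldgs list

Any disk group name reported by this command should not be reused
for a new disk group during the installation.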
If the affected disk group contains an essential volume such as /usr,
/opt, or /var, the installation is unlikely to succeed, because those
volumes are needed to boot and bring the system up. If the volumes
are not essential, it may be possible to ignore the errors and let
the system continue to boot; additional VxVM errors may appear beyond
this initial one.
- To continue and ignore the error, answer "y". At the shell prompt
  (which can first be used to inspect the conflict, as sketched after
  these options), type "exit 2" and press Return.

Or

- If you do not wish to continue the installation, answer "n". The
  system reboots, and you must then reinstall, avoiding duplicate
  disk group names.
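Before typing "exit 2", the debugging shell can be used to confirm
which disk group is involved and whether it holds anything important.
A minimal sketch, assuming the VxVM commands are available at the
same paths used by the installer (output omitted):

    # Disk groups already imported in the install environment:
    /sbin/vxdg list

    # Volumes contained in the conflicting disk group (dg02 in the
    # example above):
    /sbin/vxprint -g dg02 -ht

    # Continue the installation and ignore the error:
    exit 2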
One potential workaround for this problem is to set the control
parameter clean_all_disks to true in *INSTALLFS (refer to
instl_adm(4) for more information on this keyword). However, this is
not recommended in most instances, and extreme caution is urged:
when this variable is set to true, ALL disks found on the system are
cleaned, and all data on them is lost, even on disks that were not
selected for the installation. In a SAN environment or
MC/ServiceGuard cluster, the system being installed may be able to
see disks currently used by other systems; setting clean_all_disks
to true destroys the data on those disks as well. However, this does
clean off the disk