5.5.5 HP IBRIX X9000 Release Notes

IbrixFusionManager-<version>
4. If the RPM is present on the node, remove the RPM:
# rpm -e IbrixFusionManager-<version>
Migration to an agile management console configuration
When a cluster is configured with a dedicated, standard management console, the Quick Restore
installation procedure installs both the IbrixFusionManager and the IbrixServer packages on the
dedicated, standard management console and on each node of the cluster. If you then attempt to
migrate to an agile management console configuration, the migration procedure will fail. To avoid
the failure, uninstall the IbrixServer package from the dedicated, standard management console, and
uninstall the IbrixFusionManager package from the file serving nodes. You can then perform the
migration.
Complete the following steps:
1. On the standard management console, check for the IbrixServer RPM:
# rpm -qa | grep -i IbrixServer
If the RPM is present, the output will be similar to the following:
IbrixServer-<version>
2. If the IbrixServer RPM is present, uninstall the RPM:
# rpm -e IbrixServer-<version>
3. On each file serving node, check for the IbrixFusionManager RPM:
# rpm -qa | grep -i IbrixFusionManager
If the RPM is present, the output will be similar to the following:
IbrixFusionManager-<version>
4. If the RPM is present on the node, remove the RPM:
# rpm -e IbrixFusionManager-<version>
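Before attempting the migration, you can confirm that the packages removed in steps 1 through 4 are really gone from every host. The following is a minimal sketch, not part of the documented procedure; it assumes passwordless root SSH between hosts, and the hostnames mgmt-console, fsn1, and fsn2 are placeholders for the actual standard management console and file serving nodes:
#!/bin/bash
# Sketch: confirm the packages uninstalled in steps 1-4 are no longer present.
# The hostnames below are placeholders; passwordless root SSH is assumed.
CONSOLE="mgmt-console"
FSNODES="fsn1 fsn2"

# The standard management console should no longer have the IbrixServer RPM.
ssh "$CONSOLE" "rpm -qa | grep -i IbrixServer" \
    && echo "IbrixServer is still installed on $CONSOLE"

# The file serving nodes should no longer have the IbrixFusionManager RPM.
for node in $FSNODES; do
    ssh "$node" "rpm -qa | grep -i IbrixFusionManager" \
        && echo "IbrixFusionManager is still installed on $node"
done
If either check prints a package name, repeat the corresponding uninstall step on that host before performing the migration.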
Remote replication
If the target file system is unexported while remote replication is running, the replication of data
stops. To ensure that replication takes place, do not unexport a file system that is the target of a
replication (for example, with ibrix_exportcfr -U).
Remote replication will fail if the target file system is unmounted. To ensure that replication takes
place, do not unmount the target file system.
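Before starting or restarting replication, the state of the target side can be checked with standard Linux tools rather than the X9000 CLI. The following is a minimal sketch; it assumes the target file system is exported over NFS, and target-node and /mnt/target_fs are placeholders for the actual target node and mount point:
#!/bin/bash
# Sketch: verify the replication target is still mounted and exported.
# target-node and /mnt/target_fs are placeholders.
TARGET_NODE="target-node"
TARGET_MOUNT="/mnt/target_fs"

# The target file system must remain mounted on the target node.
ssh "$TARGET_NODE" "mountpoint -q $TARGET_MOUNT" \
    && echo "$TARGET_MOUNT is mounted on $TARGET_NODE" \
    || echo "WARNING: $TARGET_MOUNT is not mounted; replication will fail"

# The target file system must remain exported (showmount queries the NFS export list).
showmount -e "$TARGET_NODE" | grep -q "$TARGET_MOUNT" \
    && echo "$TARGET_MOUNT is exported by $TARGET_NODE" \
    || echo "WARNING: $TARGET_MOUNT is not exported; replication will stop"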
When continuous remote replication is used and File Serving Nodes are configured for High
Availability, take the following steps after a node fails:
1. Stop continuous remote replication.
2. After the migration to the surviving node is complete, restart continuous remote replication
to heal the replica.
If these steps are not taken, any changes that had not yet been replicated from the failed node
will be lost.
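The ordering of these two steps is what matters: replication is stopped first and restarted only after the migration to the surviving node has completed. The sketch below illustrates that ordering only; the exact commands for stopping and restarting continuous remote replication are not shown here, so they are supplied through the placeholder environment variables CRR_STOP_CMD and CRR_START_CMD (take the real invocations from the X9000 CLI reference):
#!/bin/bash
# Sequencing sketch only: stop replication, wait for the failover migration to
# finish, then restart replication so that the replica is healed.
# CRR_STOP_CMD and CRR_START_CMD are placeholders for the real X9000 commands.
: "${CRR_STOP_CMD:?set CRR_STOP_CMD to the command that stops continuous remote replication}"
: "${CRR_START_CMD:?set CRR_START_CMD to the command that restarts continuous remote replication}"

eval "$CRR_STOP_CMD"

# Do not restart replication until the migration to the surviving node is complete.
read -r -p "Press Enter once the failover migration to the surviving node has completed... "

eval "$CRR_START_CMD"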
Continuous remote replication will fail if the configured cluster interface and the corresponding
cluster Virtual Interface (VIF) for the management console are in a private network on either the
source or target cluster. By default, continuous remote replication uses the cluster interface and
the cluster VIF (the ibrixinit -C and -v options, respectively) for communication between
the source cluster and the target cluster. To work around potential continuous remote replication
communication errors, it is important that the ibrixinit -C and -v arguments correspond to
a public interface and a public cluster VIF, respectively. If necessary, the
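One quick sanity check for this requirement is to verify that the addresses chosen for the cluster interface and the cluster VIF are not in an RFC 1918 private range. The following is a minimal sketch that classifies a single IPv4 address passed as an argument; it does not read the X9000 configuration:
#!/bin/bash
# Sketch: report whether an IPv4 address falls in an RFC 1918 private range.
ADDR="${1:?usage: $0 <IPv4 address of the cluster interface or cluster VIF>}"
case "$ADDR" in
    10.*|192.168.*|172.1[6-9].*|172.2[0-9].*|172.3[0-1].*)
        echo "$ADDR is in an RFC 1918 private range; continuous remote replication may fail"
        ;;
    *)
        echo "$ADDR is not in an RFC 1918 private range"
        ;;
esac
For example, running the script against 192.168.10.5 reports a private address, while a routable address on the public interface does not.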