Table Of Contents
- Reference Architecture
- Contents
- vRealize Automation Reference Architecture Guide
- Updated Information
- New Features in vRealize Automation Since Release 6.2
- Initial Deployment and Configuration Recommendations
- vRealize Automation Deployment
- vRealize Business Standard Edition Deployment Considerations
- vRealize Automation Scalability
- vRealize Business Standard Edition Scalability
- vRealize Automation High Availability Configuration Considerations
- vRealize Business Standard Edition High Availability Considerations
- vRealize Automation Hardware Specifications
- vRealize Automation Small Deployment Requirements
- vRealize Automation Medium Deployment Requirements
- vRealize Automation Large Deployment Requirements
Database Deployment
vRealize Automation 7.0 and later automatically clusters the appliance database. All new 7.0
and later deployments must use the internal appliance database. vRealize Automation 6.2.x instances
that are upgrading can continue to use an external appliance database, but it is recommended that these
databases be migrated to the internal appliance database. See the vRealize Automation 7.0 product
documentation for more information about the upgrade process.
For production deployments of the Infrastructure components, use a dedicated database server to host
the Microsoft SQL Server (MSSQL) databases. vRealize Automation requires that machines which
communicate with the database server be configured to use Microsoft Distributed Transaction
Coordinator (MSDTC). By default, MSDTC requires TCP port 135 and a dynamic range of ports 1024 through 65535.
For more information about changing the default MSDTC ports, see the Microsoft Knowledge Base article
Configuring Microsoft Distributed Transaction Coordinator (DTC) to work through a firewall, available at
https://support.microsoft.com/en-us/kb/250367.
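If opening the full dynamic port range through a firewall is not acceptable, the Knowledge Base article above describes restricting RPC to a smaller range through the registry. The following is a minimal sketch, run from an elevated command prompt on each machine that participates in MSDTC transactions. The 5000-5100 range is an illustrative value, not a vRealize Automation requirement; choose a range appropriate for your environment.

```
rem Restrict RPC dynamic ports to an example range of 5000-5100
reg add HKLM\Software\Microsoft\Rpc\Internet /v Ports /t REG_MULTI_SZ /d "5000-5100" /f
reg add HKLM\Software\Microsoft\Rpc\Internet /v PortsInternetAvailable /t REG_SZ /d Y /f
reg add HKLM\Software\Microsoft\Rpc\Internet /v UseInternetPorts /t REG_SZ /d Y /f

rem Allow the RPC endpoint mapper and the restricted range through Windows Firewall
netsh advfirewall firewall add rule name="RPC Endpoint Mapper" dir=in action=allow protocol=TCP localport=135
netsh advfirewall firewall add rule name="MSDTC RPC Range" dir=in action=allow protocol=TCP localport=5000-5100
```

The same port range must also be permitted on any firewalls between the machines, and each machine must be restarted for the registry change to take effect.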
vRealize Automation does not support SQL Server AlwaysOn Availability Groups because of its dependency on MSDTC.
Where possible, use a SQL Server Failover Cluster Instance with a shared disk.
Data Collection Configuration
The default data collection settings provide a good starting point for most implementations. After
deploying to production, continue to monitor the performance of data collection to determine whether you
must make any adjustments.
Proxy Agents
For maximum performance, deploy agents in the same data center as the endpoints with which they are
associated. You can install additional agents to increase system throughput and concurrency. Distributed
deployments can have multiple agent servers distributed around the globe.
When agents are installed in the same data center as their associated endpoint, data collection
performance can improve by 200 percent, on average. The measured collection time includes only
the time spent transferring data between the proxy agent and the manager service; it does not include the
time the manager service takes to process the data.
For example, suppose you deploy the product to a data center in Palo Alto and you have vSphere
endpoints in Palo Alto, Boston, and London. In this configuration, a vSphere proxy agent is deployed
in each of Palo Alto, Boston, and London for its respective endpoint. If, instead, agents are deployed only in
Palo Alto, you might see a 200 percent increase in data collection time for the Boston and London endpoints.
Reference Architecture
VMware, Inc. 9