PUBLISHED BY
Microsoft Press
A Division of Microsoft Corporation
One Microsoft Way
Redmond, Washington 98052-6399

Copyright © 2010 by Microsoft Corporation

All rights reserved. No part of the contents of this book may be reproduced or transmitted in any form or by any means without the written permission of the publisher.

Library of Congress Control Number: 2010925075

Printed and bound in the United States of America.

1 2 3 4 5 6 7 8 9 WCT 5 4 3 2 1 0

Distributed in Canada by H.B. Fenn and Company Ltd.
I dedicate this book to my wife and children, who make it all worthwhile.
—Ross Mistry

I dedicate this book to my husband and best friend, Gerry.
—Stacia Misner
Contents at a Glance

Introduction

PART I  DATABASE ADMINISTRATION
CHAPTER 1  SQL Server 2008 R2 Editions and Enhancements
CHAPTER 2  Multi-Server Administration
CHAPTER 3  Data-Tier Applications
CHAPTER 4  High Availability and Virtualization Enhancements
CHAPTER 5  Consolidation and Monitoring

PART II  BUSINESS INTELLIGENCE DEVELOPMENT
CHAPTER 6  Scalable Data Warehousing
CHAPTER 7  Master Data Services
CHAPTER 8  Complex Event Processing with StreamInsight
CHAPTER 9  Reporting Services Enhancements
CHAPTER 10  Self-Service Analysis with PowerPivot
Contents

Introduction

PART I  DATABASE ADMINISTRATION

CHAPTER 1  SQL Server 2008 R2 Editions and Enhancements
    SQL Server 2008 R2 Enhancements for DBAs
        Application and Multi-Server Administration Enhancements
        Additional SQL Server 2008 R2 Enhancements for DBAs
    Advantages of Using Windows Server 2008 R2
    SQL Server 2008 R2 Editions

CHAPTER 2  Multi-Server Administration
    Creating a UCP
        Creating a UCP by Using SSMS
        Creating a UCP by Using Windows PowerShell
        UCP Post-Installation Steps
    Enrolling SQL Server Instances

CHAPTER 4  High Availability and Virtualization Enhancements
    Enhancements to High Availability with Windows Server 2008 R2
    Failover Clustering with Windows Server 2008 R2
        Traditional Failover Clustering
        Guest Failover Clustering
        Enhancements to the Validate A Configuration Wizard
        The Windows Server 2008 R2 Best Practices Analyzer
    SQL Server 2008 R2 Virtualization and Hyper-V

PART II  BUSINESS INTELLIGENCE DEVELOPMENT

CHAPTER 6  Scalable Data Warehousing
    Parallel Data Warehouse Architecture
        Data Warehouse Appliances
        Processing Architecture
        The Multi-Rack System
        Hub-and-Spoke Architecture
    Data Management

CHAPTER 7  Master Data Services
    Integration
        Importing Master Data
        Exporting Master Data
    Administration
        Versions
        Security
        Model Deployment
    Programmability

CHAPTER 9  Reporting Services Enhancements
    New Data Sources
    Expression Language Improvements
        Combining Data from More Than One Dataset
        Aggregation
        Conditional Rendering Expressions
        Page Numbering
        Read/Write Report Variable
    Layout Control

CHAPTER 10  Self-Service Analysis with PowerPivot
    PowerPivot for Excel
        The PowerPivot Add-in for Excel
        Data Sources
        Data Preparation
        PowerPivot Reports
        Data Analysis Expressions
    PowerPivot for SharePoint
Acknowledgments

I would like to first acknowledge Shirmattie Seenarine for assisting me on this title. I couldn't have written this book without your assistance in such a short timeframe with everything else going on in my life. Your hard work, contributions, edits, and perseverance are much appreciated. Thank you to fellow SQL Server MVP Kevin Kline for introducing me to the former SQL Server product group manager Matt Hollingsworth, who started the chain of events that led up to this book.
And last but not least, I would like to thank my Microsoft mentors who assisted with my career development and transition to the Microsoft Technology Center in Silicon Valley: Kelly Oliver, Alex Viera, Buck Woody, Kevin Tsai, and Burzin Patel.
—Ross Mistry

The chapters of Part II covering the BI features of Microsoft SQL Server 2008 R2 are more complete and more accurate thanks to the efforts of the members of each product team who graciously participated in the review process.
Introduction

Our purpose in Introducing Microsoft SQL Server 2008 R2 is to point out both the new and the improved in the latest version of SQL Server. Because this version is Release 2 (R2) of SQL Server 2008, you might think the changes are relatively minor—more than a service pack, but not enough to justify an entirely new version.
Chapter 2, “Multi-Server Administration,” explains how the management of multiple instances is improved with the introduction of the SQL Server Utility Control Point. Step-by-step instructions show DBAs how to quickly designate a SQL Server instance as a Utility Control Point and enroll instances for centralized multi-server management. Chapter 3, “Data-Tier Applications,” focuses on how to streamline deployment and manage and upgrade database applications with the new data-tier application feature.
See the “What’s New” topic in SQL Server Books Online at http://msdn.microsoft.com/en-us/library/bb500435(SQL.105).aspx for the most up-to-date list of changes to the product. Be aware that you might also notice some minor differences between the RTM version of the product and the descriptions and screen shots that we provide.

Support for This Book

Every effort has been made to ensure the accuracy of this book.
PART I
Database Administration
Ross Mistry

CHAPTER 1  SQL Server 2008 R2 Editions and Enhancements
CHAPTER 2  Multi-Server Administration
CHAPTER 3  Data-Tier Applications
CHAPTER 4  High Availability and Virtualization Enhancements
CHAPTER 5  Consolidation and Monitoring
CHAPTER 1
SQL Server 2008 R2 Editions and Enhancements

Microsoft SQL Server 2008 R2 is the most advanced, trusted, and scalable data platform released to date.
Microsoft has made major investments in the SQL Server product as a whole; however, the new features and breakthrough capabilities that should interest DBAs the most are the advancements in application and multi-server administration. This section introduces some of the new features and capabilities.

Application and Multi-Server Administration Enhancements

The SQL Server product group has made sizeable investments in improving application and multi-server management capabilities.
These capabilities help organizations reduce the total cost of ownership of their database environment. The new SQL Server Utility dashboards also assist with consolidation efforts. Figure 1-1 illustrates the SQL Server Utility dashboard and viewpoints, which provide superior insight into resource utilization and policy violations.

FIGURE 1-1 Monitoring resource utilization with the SQL Server Utility dashboard and viewpoints

■ Consolidation management Organizations can maximize their investments by consolidating SQL Server resources onto fewer systems.
FIGURE 1-2 Identifying consolidation opportunities with the SQL Server Utility dashboard and viewpoints ■ Customization of utilization thresholds and policies DBAs can customize the utilization threshold and policies for managed instances of SQL Server and deployed data-tier applications to suit the needs of their environments.
This can be done at scale, with information on resource utilization throughout the managed database environment, as a result of centralized visibility. In addition, a data-tier developer can build a data-tier application with Visual Studio 2010 and deploy the newly created DAC package to a managed instance of SQL Server through the Utility Control Point.
In the example in Figure 1-4, a DBA has optimized hardware resources within the environment by modifying the global utilization policies to meet the needs of the organization. For example, the global CPU policies for a managed instance of SQL Server and for a computer have been configured to report overutilization when utilization exceeds 85 percent.
Parallel Data Warehouse uses a hub-and-spoke (control node and compute nodes) architecture. Performance improvements can be attained with Parallel Data Warehouse's design approach because it partitions large tables over several physical nodes, resulting in each node having its own CPU, memory, storage, and SQL Server instance. This design addresses both speed and scale because a control node evenly distributes data to all compute nodes.
A new installation option configures PowerPivot data access in the farm. This new approach promises better integration with SharePoint while also enhancing SharePoint's support of PowerPivot workbooks published to SharePoint. Chapter 10, “Self-Service Analysis with PowerPivot,” discusses PowerPivot for SharePoint. NOTE In order to use this new installation feature option, SharePoint 2010 must be installed but not configured prior to installing SQL Server 2008 R2.
■ Hyper-V improvements Building on the approval and success of the original Hyper-V release, Windows Server 2008 R2 delivers several new capabilities to the Hyper-V platform to further improve the SQL Server virtualization experience. First, availability can be stepped up with the introduction of Live Migration, which makes it possible to move SQL Server virtual machines (VMs) between Hyper-V hosts without service interruption.
Premium Editions

The premium editions of SQL Server 2008 R2 are meant to meet the highest demands of large-scale datacenters and data warehouse solutions. The two editions are

■ Datacenter For the first time in the history of SQL Server, a datacenter edition is offered. SQL Server 2008 R2 Datacenter provides the highest levels of security, reliability, and scalability when compared to any other edition.
■ Standard SQL Server 2008 R2 Standard is a complete data management and BI platform that provides medium-class solutions for smaller organizations. It does not include all the bells and whistles included in Datacenter and Enterprise; however, it continues to offer best-in-class ease of use and manageability. Backup compression, which was an Enterprise-only feature in SQL Server 2008, is now included in SQL Server 2008 R2 Standard.
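For example, a compressed backup on Standard now requires only the WITH COMPRESSION option; a minimal T-SQL sketch follows (the database name and backup path are placeholders):

    -- Back up a database with backup compression, now available in Standard.
    BACKUP DATABASE AdventureWorks
    TO DISK = N'D:\Backups\AdventureWorks.bak'
    WITH COMPRESSION, CHECKSUM, STATS = 10;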
■ Compact SQL Server 2008 R2 Compact is typically used to develop mobile and small desktop applications. It is free to use and is commonly redistributed with embedded and mobile independent software vendor (ISV) applications. NOTE Review “Features Supported by the Editions of SQL Server 2008 R2” at http://msdn.microsoft.com/en-us/library/cc645993(SQL.105).aspx for a complete comparison of the key capabilities of the different editions of SQL Server 2008 R2.
TABLE 1-1 Hardware Requirements

HARDWARE COMPONENT    REQUIREMENTS
Disk Space            Database Engine: 280 MB
                      Analysis Services: 90 MB
                      Reporting Services: 120 MB
                      Integration Services: 120 MB
                      Client components: 850 MB
                      SQL Server Books Online: 240 MB

TABLE 1-2 Software Requirements

SOFTWARE COMPONENT    REQUIREMENTS
Operating system      Windows Server 2003 SP2 x64 Datacenter, Enterprise, or Standard edition; or
                      the 64-bit editions of Windows Server 2008 SP2 Datacenter, Datacenter without
                      Hyper-V, Enterprise, Enterprise without Hyper-V, Standard, or Standard without
                      Hyper-V
Installation, Upgrade, and Migration Strategies

Like its predecessors, SQL Server 2008 R2 is available in both 32-bit and 64-bit editions, both of which can be installed either with the SQL Server Installation Wizard or through a command prompt. As was briefly mentioned earlier in this chapter, it is now also possible to use Sysprep in conjunction with SQL Server for automated deployments with minimal administrator intervention.
In-Place Upgrade Pros and Cons

The in-place upgrade strategy is usually easier and considered less risky than the side-by-side migration strategy. Upgrading is also fairly quick, and additional hardware is not required. Because the names of the server and instances do not change during the upgrade process, applications continue to connect to the same instances. As a result, this strategy is less time consuming, because there is no need to change application connection strings.
Side-by-Side Migration

The term side-by-side migration describes the deployment of a brand-new SQL Server 2008 R2 instance alongside a legacy SQL Server instance. When the SQL Server 2008 R2 installation is complete, a DBA migrates data from the legacy SQL Server database platform to the new SQL Server 2008 R2 database platform. Side-by-side migration is depicted in Figure 1-6.

NOTE It is possible to conduct a side-by-side migration to SQL Server 2008 R2 by using the same server.
However, there are disadvantages to the side-by-side strategy. Additional hardware might need to be purchased. Applications might also need to be redirected to the new SQL Server 2008 R2 instance, and the approach might not be practical for very large databases because duplicate storage is required during the migration process.
CHAPTER 2
Multi-Server Administration

Over the years, an increasing number of organizations have turned to Microsoft SQL Server because it embodies the Microsoft Data Platform vision to help organizations manage any data, at any place, and at any time. The biggest challenge organizations face with this growing number of SQL Server installations has been management.
These management capabilities center on the SQL Server Utility, which forms a collection of managed instances with a repository for performance data and management policies. After data is collected from managed instances, Utility Explorer and the SQL Server Utility dashboard and viewpoints in SQL Server Management Studio (SSMS) provide administrators with a view of SQL Server resource health through policy evaluation and analysis of trending instances and applications throughout the enterprise.
REAL WORLD

Many organizations that participate in the Microsoft SQL Server early adopter program are currently either evaluating SQL Server 2008 R2 or already using it in their production infrastructure. The consensus is that organizations should factor the SQL Server Utility into the design of every SQL Server deployment. The SQL Server Utility allows you to increase visibility and control, optimize resources, and improve overall efficiencies within your SQL Server infrastructure.
■ The Utility Explorer user interface A component of SSMS, this interface provides a hierarchical tree view for managing and controlling the SQL Server Utility. Its uses include connecting to a utility, creating a UCP, enrolling instances, deploying data-tier applications, and viewing utilization reports affiliated with managed instances and data-tier applications. You launch Utility Explorer from SSMS by selecting View and then choosing Utility Explorer.
UCP Prerequisites

As with other SQL Server components and features, the deployment of a SQL Server UCP must meet the following specific prerequisites and requirements:

■ The SQL Server version running the UCP must be SQL Server 2008 R2 or higher. (SQL Server 2008 R2 is also referred to as version 10.5.)
■ The SQL Server 2008 R2 edition must be Datacenter, Enterprise, Evaluation, or Developer.
■ The SQL Server system running the UCP must reside within a Windows Active Directory domain.
Creating a UCP

The UCP is relatively easy to set up and configure. You can deploy it either by using the Create Utility Control Point Wizard in SSMS or by leveraging Windows PowerShell scripts. The high-level steps for creating a UCP include specifying the instance of SQL Server in which the UCP will be created, choosing the account to run the utility collection set, ensuring that the instance is validated and passes the conditions test, reviewing the selections made, and finalizing the UCP deployment.
5. On the Specify The Instance Of SQL Server page, click the Connect button to specify the instance of SQL Server in which the new UCP will be created, and then click Connect in the Connect To Server dialog box. 6. Specify a name for the UCP, as illustrated in Figure 2-3, and then click Next to continue.
8. On the next page, the SQL Server instance is compared against a series of prerequisites before the UCP is created. Failed conditions are displayed in a validation report. Correct all issues, and then click the Rerun Validation button to verify the changes against the validation rules. To save a copy of the validation report for future reference, click Save Report, and then specify a location for the file. To continue, click Next.
UCP Post-Installation Steps

When the Create Utility Control Point Wizard is closed, the Utility Explorer is invoked, and you are automatically connected to the newly created UCP. The UCP is automatically enrolled as a managed instance. The data collection process also commences immediately. The dashboards, status icons, and utilization graphs associated with the SQL Server Utility display meaningful information after the data is successfully uploaded.
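For the Windows PowerShell alternative mentioned earlier, the following is a minimal sketch. It assumes the SQL Server 2008 R2 SMO assemblies are available; the CreateUtility call follows the SQL Server 2008 R2 utility documentation, but verify its exact signature before relying on it, and treat the server, utility, and account names as placeholders.

    # Minimal sketch: create a UCP from Windows PowerShell instead of the wizard.
    # Assumed types per the SQL Server 2008 R2 SMO documentation; verify signatures.
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Management.Sdk.Sfc") | Out-Null
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Management.Utility") | Out-Null

    $server = New-Object Microsoft.SqlServer.Management.Smo.Server "SQL2K8R2-01"
    $connection = New-Object Microsoft.SqlServer.Management.Sdk.Sfc.SqlStoreConnection `
        $server.ConnectionContext.SqlConnectionObject

    # Create the UCP, supplying the Windows account that runs the utility collection set.
    $utility = [Microsoft.SqlServer.Management.Utility.Utility]::CreateUtility(
        "Utility01", $connection, "CONTOSO\UtilityAccount", "P@ssw0rd")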
Managed Instance Enrollment Prerequisites

As with many of the other tasks in this chapter, certain conditions must be satisfied to successfully enroll an instance:

■ You must have administrator privileges on the instance of SQL Server.
■ The instance of SQL Server must be SQL Server 2008 R2 or higher.
■ The SQL Server edition must support instance enrollment.
■ The instance of SQL Server cannot be enrolled with any other UCP.
■ The instance of SQL Server cannot already be a UCP.
8. As shown in Figure 2-4, a series of conditions will be evaluated against the SQL Server instance to ensure that it passes all of the prerequisites before the instance is enrolled. If there are any failures preventing the enrollment of the SQL Server instance, correct them and then click Rerun Validation. To save the validation report, click Save Report and specify a location for the file. Click Next to continue. FIGURE 2-4 The SQL Server Instance Validation screen 9.
Enrolling SQL Server Instances by Using Windows PowerShell

Windows PowerShell can also be used to enroll instances. In fact, scripting may be the way to go if there is a need to enroll a large number of instances into a SQL Server UCP. Let’s say you need to enroll 200 instances, for example. Using the Enroll Instance Wizard in SSMS can be very time consuming, because the wizard is a manual process in which you can enroll only one instance at a time.
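A minimal sketch of that scripted approach follows (type and method names per the SQL Server 2008 R2 documentation; verify the exact EnrollInstance signature, and treat the file path, instance names, and proxy account as placeholders):

    # Minimal sketch: enroll every instance listed in a text file into an existing UCP.
    $ucp = New-Object Microsoft.SqlServer.Management.Smo.Server "SQL2K8R2-01"
    $ucpConnection = New-Object Microsoft.SqlServer.Management.Sdk.Sfc.SqlStoreConnection `
        $ucp.ConnectionContext.SqlConnectionObject
    $utility = [Microsoft.SqlServer.Management.Utility.Utility]::Connect($ucpConnection)

    # One instance name per line, for example SERVER01\INST01.
    foreach ($name in Get-Content "C:\Scripts\InstancesToEnroll.txt") {
        $instance = New-Object Microsoft.SqlServer.Management.Smo.Server $name
        $instanceConnection = New-Object Microsoft.SqlServer.Management.Sdk.Sfc.SqlStoreConnection `
            $instance.ConnectionContext.SqlConnectionObject

        # The proxy account runs the utility collection set on the managed instance
        # (assumed parameter order).
        $utility.EnrollInstance($instanceConnection, "CONTOSO\UtilityAccount", "P@ssw0rd")
    }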
FIGURE 2-5 The Managed Instances dashboard

Managing Utility Administration Settings

After you are connected to a UCP, use the Utility Administration node in the Utility Explorer navigation pane to view and configure global policy settings, security settings, and data warehouse settings across the SQL Server Utility. The configuration tabs affiliated with the Utility Administration node are the Policy, Security, and Data Warehouse tabs.
3. On the Utility Explorer toolbar, click the Connect To Utility icon. 4. In the Connect To Server dialog box, specify a UCP instance, and then click Connect. 5. After you are connected, you can deploy data-tier applications, manage instances, and configure global settings. NOTE It is not possible to connect to more than one UCP at the same time.
FIGURE 2-6 Modifying global policies for managed instances

Volatile Resource Policy Evaluation

The final section on the Policy tab is Volatile Resource Policy Evaluation. This section, displayed in Figure 2-7, provides strategies to minimize unnecessary reporting noise and unwanted violation reporting in the SQL Server Utility. You can choose how frequently the CPU utilization policies can be in violation before reporting the CPU as overutilized.
FIGURE 2-7 Volatile resource policy evaluation The next set of configurable elements allows you to determine how frequently CPU utilization polices should be in violation before the CPU is reported as being underutilized. The default evaluation period for processor underutilization is 1 week. Options range from 1 day to 1 month. The default percentage of data points that must be in violation before a CPU is reported as being underutilized is 90 percent. You can choose between 0 percent and 100 percent.
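To make these defaults concrete: assuming data points are uploaded every 15 minutes (an assumption about the collection interval; verify it for your environment), a 1-week evaluation period contains roughly 7 × 24 × 4 = 672 data points, so with the default 90 percent threshold, about 605 or more of those points must be in violation before the CPU is reported as underutilized.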
The Security Tab

From a security and authorization perspective, there are two security roles associated with a UCP. The first role is the Utility Administrator, and the second role is the Utility Reader. The Utility Administrator is ultimately the “superuser” who has the ability to manage any setting or view any dashboard or viewpoint associated with the UCP. For example, a Utility Administrator can enroll instances, manage settings in the Utility Administration node, and much more.
3. On the General page, enter the name of a Windows user in the Login Name box.
4. Select Windows Authentication.
5. Click OK.

NOTE Unlike in the previous example, do not assign this user the sysadmin role on the Server Role page. If you do, the user will automatically become a Utility Administrator and not a Utility Reader on the UCP.

6. In Utility Explorer, connect to the UCP instance in which you created the login (SQL2K8R2-01\Test2).
7. On the Security tab of the Utility Administration node, grant the new login the read privilege.
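Steps 3 through 5 can also be performed in T-SQL; a minimal sketch (the domain account is a placeholder):

    -- Create a Windows-authenticated login for the future Utility Reader.
    -- Do not add it to the sysadmin server role, or it becomes a Utility Administrator.
    CREATE LOGIN [CONTOSO\UtilityReader01] FROM WINDOWS;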
REAL WORLD

Many organizations have large teams managing their SQL Server infrastructures because they have hundreds of SQL Server instances within their environment. Let’s say you wanted to grant 50 users the read-only privilege for the SQL Server Utility dashboard and viewpoints. It would be very impractical to grant every single database administrator the read-only privilege.
4. In the Utility Explorer Content pane, select the desired data retention period for the UMDW, as displayed in Figure 2-9. The options are 1 month, 3 months, 6 months, 1 year, or 2 years.

FIGURE 2-9 Configuring the data retention period

5. Click the Apply button to save the changes. Alternatively, click the Discard Changes or Restore Defaults buttons as needed.
CHAPTER 3
Data-Tier Applications

Ask application developers or database administrators what it was like to work with data-driven applications in the past, and most probably do not use adjectives such as “easy,” “enjoyable,” or “wonderful” when they describe their experience. Indeed, the development, deployment, and even the management of data-driven applications in the past were a struggle.
The Data-Tier Application Life Cycle

There are two common methods for generating a DAC. One is to author and build a DAC using a SQL Server data-tier application project in Microsoft Visual Studio 2010. In the second method, you can extract a DAC from an existing database by using the Extract Data-Tier Application Wizard in SQL Server Management Studio. Alternatively, a DAC can be generated with Windows PowerShell commands.
Data-tier developers using a data-tier application project template in Visual Studio 2010 first build a DAC and then deploy the DAC package to an instance of SQL Server 2008 R2. In contrast, database administrators using the Extract Data-Tier Application Wizard in SQL Server Management Studio generate a DAC from an existing database. The DAC package is then deployed to a SQL Server 2008 R2 instance.
Real World

Organizations looking to accelerate and standardize deployment of database applications within their database environments should leverage data-tier applications included in SQL Server 2008 R2. By utilizing data-tier applications, an organization captures intent and produces a single deployment package, providing a more reliable and consistent deployment experience than ever before.
■ Type: User-defined Data Type
■ Type: User-defined Table Type
■ User
■ View

Database administrators do not have to worry about looking for unsupported objects. This laborious task is accomplished with the Extract Data-Tier Application Wizard. Unsupported objects such as DDL triggers, service broker objects, and full-text catalog objects are identified and reported by the wizard. Unsupported objects are identified with a red icon that represents an invalid entry.
4. In the Project Template pane, select Data-Tier Application. 5. Specify the name, location, and solution name for the data-tier application, as shown in Figure 3-2, and click OK. FIGURE 3-2 Selecting the Data-Tier Application project template in Visual Studio 2010 6. Select Project, and then click Add New Item to add and create a database object based on the Data-Tier Application project template.
FIGURE 3-3 The Create Table schema and the Solution Explorer pane in a Visual Studio 2010 DAC project

Importing an Existing Data-Tier Application Project into Visual Studio 2010

Instead of creating a DAC from the ground up in Visual Studio 2010, a data-tier developer can choose to import an existing data-tier application into Visual Studio 2010 and then either edit the DAC or completely reverse-engineer it.
4. Review the information on the Welcome page, and then click Next. 5. On the Specify Import Options page, select the option that allows you to import from a data-tier application package. 6. Click the Browse button, and navigate to the folder in which you placed the .dacpac to import. Select the file, and then click Open. Click Next to continue. 7. Review the report that shows the status of the import actions, as illustrated in Figure 3-4, and then click Finish.
Extracting a Data-Tier Application with SQL Server Management Studio

The Extract Data-Tier Application Wizard is another tool that you can use for creating a new data-tier application. The wizard is included in SQL Server 2008 R2 Management Studio. In this method, the wizard works its way into an existing SQL Server database, reads the content of the database and the logins associated with it, and ensures that the new data-tier application can be created.
■ Description This property is optional. Use it to describe the DAC. If this section is completed, the information is saved in the msdb database under the data-tier applications node in Management Studio.

FIGURE 3-5 Specifying DAC properties when using the Extract Data-Tier Application Wizard

6. Next, indicate where the DAC package file is to be saved. Remember to use the appropriate extension, .dacpac.
The wizard then validates the database, confirms that the information is supported by the DAC, and displays DAC object issues, DAC object warnings, and DAC objects that are supported. If there are no issues, click Next to continue. You also have the option to click Save Report to capture the entire report.

FIGURE 3-6 The Extract Data-Tier Application Wizard's Validation And Summary page

NOTE The Next button is disabled on the Validation And Summary page if one or more objects are not supported by the DAC.
Installing a New DAC Instance with the Deploy Data-Tier Application Wizard

After the DAC package has been created using the data-tier application project template in Visual Studio 2010, the Extract Data-Tier Application Wizard in SQL Server Management Studio, or Windows PowerShell commands, the next step is to deploy the DAC package to a Database Engine instance running SQL Server 2008 R2. This can be achieved by using the Deploy Data-Tier Application Wizard located in SQL Server Management Studio.
FIGURE 3-7 Specifying a DAC package to deploy with the Deploy Data-Tier Application Wizard NOTE If a database with the same name already exists on the instance of SQL Server, the wizard cannot proceed. 7. The wizard then analyzes the DAC package to ensure that it is valid. If the DAC package is valid, the Update Configuration page is automatically invoked. Otherwise, an error is displayed. You need to address the error(s) and start over again. 8.
9. The next page includes a summary of the settings that are used to deploy the data-tier application. Review the information displayed in the Summary page and DAC properties tree to ensure that the actions taken are correct, and then click Next to continue. 10. The Deploy DAC page, shown in Figure 3-8, includes results such as success or failure based on each action performed during the deployment process.
NOTE Throughout this chapter, you can also use Windows PowerShell scripts in con- junction with data-tier applications to do many of the tasks discussed, such as ■ Creating data-tier applications. ■ Creating server objects. ■ Loading DAC packages from a file. ■ Upgrading data-tier applications. ■ Deleting data-tier applications.
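A minimal sketch of one such task, deploying a DAC package, follows. The DacStore and DacType types come from the SQL Server 2008 R2 SMO documentation, but verify the exact Load and Install signatures before relying on them; the server name, file path, and application name are placeholders.

    # Minimal sketch: load a DAC package from a file and deploy it to an instance.
    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Management.Dac") | Out-Null

    $server = New-Object Microsoft.SqlServer.Management.Smo.Server "SQL2K8R2-01"
    $connection = New-Object Microsoft.SqlServer.Management.Sdk.Sfc.SqlStoreConnection `
        $server.ConnectionContext.SqlConnectionObject
    $dacstore = New-Object Microsoft.SqlServer.Management.Dac.DacStore $connection

    # Load the DAC type definition from the .dacpac file.
    $stream = [System.IO.File]::OpenRead("C:\Packages\FinanceApp.dacpac")
    $dacType = [Microsoft.SqlServer.Management.Dac.DacType]::Load($stream)
    $stream.Close()

    # Install a new DAC instance; the DAC instance name becomes the database name
    # (assumed Install overload).
    $dacstore.Install($dacType, "FinanceApp")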
5. On the Set Properties page, complete the DAC properties by typing in the application name, version, and description, as described here: ■ ■ ■ Application name This refers to the name of the DAC. This value cannot be altered and is always identical to the name of the database. Version The DAC version identification helps developers working in Visual Studio identify the version in which they are currently working.
■ Delete Database The DAC metadata and the associated database are dropped. The data and log files are deleted. Logins are not removed. To delete the DAC, follow these steps: 1. In Object Explorer, connect to a SQL Server instance containing the data-tier application you plan to delete. 2. Expand the SQL Server instance, and then expand the Management folder. 3.
6. Review the information displayed in the Summary page, as shown in Figure 3-10.

FIGURE 3-10 Viewing the Summary page when deleting a DAC

Ensure that the application name, database name, and delete method are correct. If the information is correct, click Next to continue.

7. On the Delete DAC page, take a moment to review the information. This page communicates which actions failed or succeeded. Unsuccessful actions have a link next to them in the Result column.
Upgrading a Data-Tier Application

Let us recall the past for a moment, when updating changes to existing database schemas and database applications was a noticeably challenging task. Database administrators usually created scripts that included the new or updated database schema changes to be deployed. The other option was to use third-party tools. Both processes could be expensive, time consuming, and challenging to manage from a release or build perspective.
6. On the Select Package page, specify the DAC package that contains the new DAC version to upgrade to. Alternatively, you can use the Browse button to specify the location of the DAC package. When the DAC package is selected, you can verify the DAC details, such as the application name, version number, and description, in the read-only text boxes.

IMPORTANT Ensure that the DAC package and the original DAC have the same name.

7. On the next page, review any changes that the wizard detects between the database and the original DAC definition.
NOTE If the database has changed, it is a best practice to review the potential data losses before you proceed and verify that this is the outcome you want for the upgraded database. However, the original database is still preserved, renamed, and maintained on the SQL Server instance. Any data changes can be migrated from the original database to the new database after the upgrade is complete. 8. The next page includes a summary of the settings that will be used to upgrade the data-tier application.
FIGURE 3-12 Reviewing the result information on the Upgrade DAC page

NOTE Data-tier applications are a large and intricate subject. See the following sources for more information:

■ “Designing and Implementing Data-tier Applications” at http://msdn.microsoft.com/en-us/library/ee210546(SQL.105).aspx
■ “Creating and Managing Data-tier Applications” at http://msdn.microsoft.com/en-us/library/ee361996(VS.100).aspx
■ Tutorials at http://msdn.microsoft.com/en-us/library/ee210554(SQL.105).aspx
CHAPTER 4
High Availability and Virtualization Enhancements

Microsoft SQL Server 2008 R2 delivers several enhancements in the areas of high availability and virtualization. Many of the enhancements are affiliated with the Windows Server 2008 R2 operating system and the Hyper-V platform.
■ Windows Server 2008 R2 Hyper-V The Hyper-V virtualization technology improvements in Windows Server 2008 R2 were the most sought-after and anticipated enhancements for Windows Server 2008 R2. It is now possible to virtualize heavy SQL Server workloads because Windows Server 2008 R2 scales far beyond its predecessors.
To deploy failover clustering successfully in those days, administrators had to install numerous components, including Internet Information Services (IIS), Cluster Server, SQL Server 7.0 Enterprise Edition, Microsoft Distributed Transaction Coordinator (MSDTC) 2.0, and sometimes the Windows NT 4.0 Option Pack. Moreover, the hardware support, driver support, and documentation were not as forthcoming as they are today. Many IT organizations came to believe that failover clustering was a difficult technology to install and maintain.
Figure 4-1 illustrates a two-node single-instance failover cluster running SQL Server on Windows Server 2008 R2.

FIGURE 4-1 A two-node single-instance failover cluster (two nodes sharing SAN storage, with a public network and a heartbeat network)

Figure 4-2 illustrates a multiple-instance failover cluster running SQL Server on Windows Server 2008 R2.
Guest Failover Clustering

In the past, physical servers were usually affiliated with the nodes in a failover cluster. Today, virtualization technologies make it possible to form a cluster with each node being a guest operating system on virtual servers. This is known as guest failover clustering.
NOTE Guest clustering is also supported when Hyper-V is on Windows Server 2008. However, Windows Server 2008 R2 provides Live Migration for moving virtual machines between physical hosts. This is much more beneficial for a virtualized environment running SQL Server 2008 R2.

Real World

When you use guest failover clustering, make sure that the virtualized guest operating systems used for the nodes in the guest failover cluster are not on the same physical Hyper-V host.
■ Cluster Configuration
• List Cluster Services And Applications
• List Cluster Volumes
• Validate Quorum Configuration
• Validate Resource Status
• Validate Service Principal Name
• Validate Volume Consistency
■ Network
• List Network Binding Order
• Validate Multiple Subnet Properties
■ System Configuration
• Validate Cluster Service And Driver Settings
• Validate Memory Dump Settings
• Validate System Drive Variable

NOTE The wizard tests configurations and also lists information.
6. On the Confirmation page, review the details for each test, and then click Next to begin the validation process. While the validation process is running, status information is continually displayed on the Validating page until all tests are complete. After all tests are complete, the Summary page is displayed, as shown in Figure 4-4. It includes the results of the validation tests and numerous details about the information collected during each test.
The Windows Server 2008 R2 Best Practices Analyzer

Another tool available in Windows Server 2008 R2 is a server management tool referred to as the Best Practices Analyzer (BPA). The BPA determines how compliant a server role is by comparing it against best practices in eight categories: security, performance, configuration, policy, operation, pre-deployment, post-deployment, and BPA prerequisites. In each category, the effectiveness, trustworthiness, and reliability of a role are taken into consideration.
SQL Server 2008 R2 Virtualization and Hyper-V

Virtualization is one of the hottest topics of discussion in almost every SQL Server architecture design session or executive briefing session, mainly because organizations are beginning to understand the immediate and long-term benefits virtualization can offer them.
FIGURE 4-5 A Hyper-V cluster and Live Migration (four Hyper-V hosts sharing VHDs on two Cluster Shared Volumes)

Windows Server 2008 R2 Hyper-V System Requirements

Table 4-1 outlines the minimum requirements, along with the recommended system configuration, for using Hyper-V on Windows Server 2008 R2.
NOTE System requirements vary based on an organization's virtualization requirements. Organizations should size their workloads to ensure that the Hyper-V hosts can successfully accommodate all of the virtual servers and associated workloads from a CPU, memory, and disk perspective.

Practical Uses for Hyper-V and SQL Server 2008 R2

Hyper-V on Windows Server 2008 R2 is capable of accomplishing almost the same successes as dedicated servers, including the same kinds of peak load handling and security.
NOTE The Microsoft Assessment and Planning Toolkit can be used to identify whether or not an organization’s SQL Server systems are good candidates for virtualization. The toolkit also includes tools for SQL Server inventory, assessments, and intuitive reporting. A download of the Microsoft Assessment and Planning Toolkit is available on the Microsoft Download Center at http://www.microsoft.com/downloads/details.aspx?FamilyID=67240b76-3148-4e49-943d-4d9ea7f77730&displaylang=en.
Enabling CSV

Assuming that the Hyper-V cluster has already been built, the next step is enabling CSV in Failover Cluster Manager. Follow the steps in this section to enable CSV on a Hyper-V failover cluster running on Windows Server 2008 R2.

1. On a server in the Hyper-V failover cluster, click Start, click Administrative Tools, and then click Failover Cluster Manager.
2. In the Failover Cluster Manager snap-in, verify that CSV is present for the cluster that is being enabled.
5. On the Specify Name And Location page, enter the name of the SQL Server VM and specify where it will be stored. For example, in Figure 4-6 the VM is named SQLServer2008R2-VM01 and stored on Cluster Shared Volume 1.

FIGURE 4-6 The Specify Name And Location screen when a new virtual machine is being created

NOTE If a folder is not selected, the SQL Server VM is stored in the default folder configured for the Hyper-V server.
7. On the Networking page, connect the network adapter to an existing virtual network by selecting the appropriate network adapter from the menu. Click Next to continue. 8. On the Connect Virtual Hard Disk page, as shown in Figure 4-7, specify the name, location, and size to create a virtual hard disk so that you can install an operating system. Click Next to continue. FIGURE 4-7 The Connect Virtual Hard Disk page when a new virtual machine is being created 9.
11. From the Virtual Machines section of the results pane in Hyper-V Manager, rightclick the name of the SQL Server VM you just created, and click Connect. The Virtual Machine Connection tool opens. 12. In the Action menu in the Virtual Machine Connection window, click Start. 13. Follow the prompts to install the Windows Server 2008 R2 operating system. 14. When the operating system installation is complete, install SQL Server 2008 R2.
4. Under Automatic Start Action, for the What Do You Want This Virtual Machine To Do When The Physical Computer Starts? question, select Nothing, as shown in Figure 4-8. Then click Apply and OK. FIGURE 4-8 Configuring the Automatic Start Action Setting screen 5. Launch Failover Cluster Manager from Administrative Tools on the Start menu. 6. In the Failover Cluster Manager snap-in, if the cluster that will be configured is not displayed in the console tree, right-click Failover Cluster Manager.
11. On the Select Service Or Application page, shown in Figure 4-9, click Virtual Machine, and then click Next. FIGURE 4-9 Selecting the service and application for high availability 12. On the Select Virtual Machine page, shown in Figure 4-10, confirm the name of the VM you plan to make highly available. In this example, SQLServer2008R2-VM01 is used. Click Next.
NOTE To make a VM highly available, you must ensure that it is not running. It must be either turned off or shut down. 13. Confirm the selection, and then click Next. 14. The wizard configures the VM for high availability and provides a summary. To view the details of the configuration, click View Report. To close the wizard, click Finish. 15.
Initiating a Live Migration of a SQL Server VM

After an administrator has enabled CSV, created a SQL Server 2008 R2 VM, configured the automatic start option, and made the VM highly available, it is time to initiate a live migration. Perform the following steps to initiate Live Migration:

1. In the Failover Cluster Manager snap-in, if the cluster to be configured is not displayed in the console tree, right-click Failover Cluster Manager.
2. Click Manage A Cluster, and then select or specify the cluster.
CHAPTER 5
Consolidation and Monitoring

Today's competitive economy dictates that organizations reduce cost and improve agility in their database environments. This means that the many organizations running underutilized Microsoft SQL Server installations must take control of their environments to realize significant cost savings and increased agility.
Consolidating Databases and Instances

A very common SQL Server consolidation strategy involves placing many databases on a single instance of SQL Server. This approach offers organizations improved operations through centralized management, standardization, and improved performance. For example, placing multiple databases on the same SQL Server instance facilitates shared memory optimization, and database consolidation helps to reduce overhead due to fixed resource costs per instance.
FIGURE 5-2 Consolidating many databases onto a single physical host running three instances of SQL Server

FIGURE 5-3 Consolidating many databases onto multiple physical hosts running multiple instances of SQL Server

Consolidating SQL Server Through Virtualization

Another SQL Server consolidation strategy attracting interest is virtualization.
Virtualization with Hyper-V makes it possible to consolidate SQL Server instances of different editions, including both 32-bit and 64-bit versions, within a single host. In addition, physical SQL Servers can easily be virtualized by using the physical-to-virtual (P2V) migration tool included with System Center Virtual Machine Manager 2008 R2. Figure 5-4 illustrates a consolidation strategy in which many databases, instances, and physical SQL Server systems are virtualized on a single Hyper-V host.
Using the SQL Server Utility for Consolidation and Monitoring

The SQL Server Utility is the center of operations for monitoring managed instances of SQL Server, databases, and deployed data-tier applications. By using the dashboards and viewpoints included in the SQL Server Utility, DBAs can proactively monitor and view resource utilization, health state, and health policies for managed instances, databases, and deployed data-tier applications at scale.
The three main dashboards for monitoring and managing resource utilization and consolidation efforts are discussed in the next sections. These dashboards and viewpoints are

■ The SQL Server Utility dashboard.
■ The Managed Instance viewpoint.
■ The Data-Tier Applications viewpoint.
The SQL Server Utility dashboard includes the following information: ■ Utility Summary Found in the center of the top row of the Utility Explorer Content pane, this section is the first place to look. It displays the number of managed instances of SQL Server and the number of deployed data-tier applications managed by the SQL Server Utility. Use the Utility Summary section to gain quick insight into the number of objects being managed by the SQL Server Utility.
● No Data Available Either data has not been uploaded from a managed instance or there is a problem with the collection and upload process. By viewing the Managed Instance Health section, DBAs are able to quickly obtain an overview of resource utilization across all managed instances within the utility.
■ Data-Tier Application Health This section is located in the top-right corner of the Utility Explorer Content pane. Health status is illustrated in a pie chart and has four possible designations:

● Well Utilized The number of deployed data-tier applications that are not violating resource utilization policies is displayed.
● Overutilized The number of deployed data-tier applications that are violating resource overutilization policies is displayed.
● Overutilized Database Files This represents the number of deployed data-tier applications with database files that are violating file space overutilization policies.
● Overutilized Storage Volumes This represents the number of deployed data-tier applications with database files on storage volumes that are violating file space overutilization policies.
This section explained how to obtain summary information for all managed instances of SQL Server. DBAs seeking more information might be interested in the Managed Instances node in the tree view of Utility Explorer. This node helps database administrators gain deeper knowledge of health status and resource utilization data for each managed instance of SQL Server. The next section discusses this dashboard.
Resource utilization for each managed instance of SQL Server is presented in the list view located at the top of the Utility Explorer Content pane. Health state icons appear to the right of each managed instance and provide summary status for each instance of SQL Server based on the utilization category. Three icons are used to indicate the health of each managed instance of SQL Server. A green check mark indicates that an instance is well utilized and does not violate any policies.
■ Volume Space Volume space utilization is summarized in this column for volumes with databases belonging to each managed instance. The health of this parameter is determined by the global or local storage volume utilization policies for managed instances of SQL Server. As with file space reports, the health of a storage volume associated with a managed instance of SQL Server that is overutilized is reported with a red up arrow, and underutilization is reported with a green arrow.
FIGURE 5-8 The Storage Utilization tab on the Managed Instances viewpoint

Independent of how the files are grouped, health status is communicated for every database, filegroup, database file, or volume. For example, the green arrows in Figure 5-8 indicate that all databases, filegroups, and data files are underutilized. No health states are shown as overutilized.
FIGURE 5-9 The Policy Details tab on the Managed Instances viewpoint NOTE To override the global policy for a specific managed instance, select the Over- ride The Global Policy option button. Next, specify the new overutilized and underutilized numeric values in the control boxes to the right of the policy description, and then click Apply. For example, in Figure 5-9, the default global policy for the CPU of a managed instance is to consider the CPU overutilized when its usage is greater than 70 percent.
FIGURE 5-10 The Property Details tab on the Managed Instances viewpoint

Using the Data-Tier Application Viewpoint

As it is when you use the Managed Instances viewpoint to monitor health status and resource utilization for managed instances of SQL Server, using the Data-Tier Applications viewpoint enables you to monitor deployed data-tier applications managed by the SQL Server Utility Control Point.
FIGURE 5-11 The data-tier application viewpoint Resource utilization for each deployed data-tier application is presented in the list view located at the top of the Utility Explorer Content pane. Health state icons appear at the right of each deployed data-tier application and provide summary status for each deployed data-tier application based on the utilization category. Three icons are used to indicate the health state of each deployed data-tier application.
The Data-Tier Application List View

The columns presenting the state of health for each deployed data-tier application in the data-tier application list view include

■ Application CPU This column displays the health state utilization of the processor for the deployed data-tier application. The health state is determined by the CPU utilization policy for deployed data-tier applications. The CPU Utilization tab shows CPU utilization history for the selected deployed data-tier application.
Two linear graphs are presented next to each other. The first graph shows CPU utilization based on the selected deployed data-tier application, and the second graph displays data based on the computer associated with the deployed data-tier application. ■ Storage Utilization The next tab displays storage utilization for a selected deployed data-tier application, as depicted in Figure 5-12. Data is grouped by either filegroup or volume.
■ Policy Details The Policy Details tab, shown in Figure 5-13, is where a DBA can view the global policies applied to a selected deployed data-tier application. The Policy Details tab can also be used to create a custom policy that overrides the default global policy applied to a deployed data-tier application. For example, by expanding the Data-Tier Application CPU Utilization Policies section, you can observe that the global policy is applied.
The display is broken up into the following four policies, which can be viewed or overridden:

● Data-Tier Application CPU Utilization Policies
● File Space Utilization Policies
● Computer CPU Utilization Policies
● Storage Volume Utilization Policies

■ Property Details The Property Details tab, shown in Figure 5-14, displays generic property details for the selected deployed data-tier application.
PART II
Business Intelligence Development
Stacia Misner

CHAPTER 6  Scalable Data Warehousing
CHAPTER 7  Master Data Services
CHAPTER 8  Complex Event Processing with StreamInsight
CHAPTER 9  Reporting Services Enhancements
CHAPTER 10  Self-Service Analysis with PowerPivot
CHAPTER 6
Scalable Data Warehousing

Microsoft SQL Server 2008 R2 Parallel Data Warehouse is an enterprise data warehouse appliance based on technology originally created by DATAllegro and acquired by Microsoft in 2008. In the months following the acquisition, Microsoft revamped the product by changing it from a product that used the Linux operating system and Ingres database technologies to a product based on SQL Server 2008 R2 and the Windows Server 2008 operating system.
The hardware vendor assembles the appliance and preinstalls the Parallel Data Warehouse software. When the assembly process is complete, the vendor ships the appliance to you using shockproof pallets. When it arrives, you remove the appliance from the pallets, plug it into a power source, and connect it to your network. Parallel Data Warehouse is a data warehouse appliance that includes all server, networking, and storage components required to host a data warehouse. In addition, your purchase of Parallel Data Warehouse includes cables, power distribution units, and racks.
FIGURE 6-1 The multi-rack system (a control rack containing active/passive management and control nodes, a Landing Zone node for data loading, and a backup node, connected to the data rack's compute nodes and dedicated storage over dual InfiniBand and dual Fibre Channel)

The Data Rack

All activity related to parallel query processing occurs in the data rack, which is a collection of compute nodes.
The Control Rack

The control rack is a separate rack that houses the servers, storage, and networking components for the nodes that provide control, management, or interface functions. It contains several types of nodes that Parallel Data Warehouse uses to process user queries, to load and back up data, and to manage the appliance. Some of the nodes serve as intermediaries between the corporate network and the private network that connects the nodes in both the control rack and data rack.
Client connections can come from familiar tools such as SQL Server Business Intelligence Development Studio, SQL Server Integration Services, SQL Server Analysis Services, and SQL Server Reporting Services. The Nexus client is the query editor that you can use to submit queries by using SQL statements to Parallel Data Warehouse. Parallel Data Warehouse also includes DWSQL, a command-line tool for submitting SQL statements to the control node. These client tools use Data Direct's SequeLink client drivers that support the following data access driver types:

■ ODBC
■ OLE DB
■ ADO.NET
The Landing Zone Node

The Landing Zone is a high-capacity data storage node in the control rack that contains terabytes of disk space for temporary storage of user data before it is loaded into the appliance. Your ETL processes can either stage data on the Landing Zone and then load it into the appliance, or load data directly without first storing it on the Landing Zone.
Parallel Data Warehouse places any necessary data on each compute node so that it can process the query in parallel with other compute nodes without requiring data from other locations during processing. This feature, called data colocation, ensures that each compute node can execute its portion of the parallel query with no effect on the query performance of the other compute nodes.
You design the data layout on the appliance to avoid or minimize data movement for parallel queries by using either a replicated or a distributed strategy for storage. When planning which strategy to implement, you consider the types of joins that the parallel queries require. Some tables require a replicated strategy, whereas others require a distributed strategy.
A distributed table is spread across all of the compute nodes in the appliance. There are performance considerations for the selection of a distribution column, such as distinctness, data skew, and the types of queries executed on the system. For a detailed discussion of the choice of distributed tables, refer to the product documentation. To distribute the rows in the fact table, a hash function assigns each row to one of many storage locations based on the distribution column.
CREATE DATABASE

The CREATE DATABASE statement has a set of options for supporting distributed and replicated tables. You determine how much space you need in total for the database for replicated tables, distributed tables, and logs. Parallel Data Warehouse manages the database according to your specifications.
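A minimal sketch of such a statement follows (option names per the Parallel Data Warehouse documentation; the database name and sizes are illustrative assumptions):

    -- Sketch: create a PDW database, reserving space for replicated tables,
    -- distributed tables, and logs. Values are in GB and are placeholders.
    CREATE DATABASE SalesDW
    WITH (
        AUTOGROW = ON,            -- let the appliance grow allocations as needed
        REPLICATED_SIZE = 50,     -- GB per compute node for replicated tables
        DISTRIBUTED_SIZE = 2000,  -- GB in total for distributed tables
        LOG_SIZE = 100            -- GB for the transaction log
    );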
NOTE Parallel Data Warehouse does not use the Transact-SQL partition scheme or partition function. Also, you can create a clustered index only when you use CREATE TABLE. To create a nonclustered index, you use CREATE INDEX.
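As a sketch of both storage strategies in the Parallel Data Warehouse dialect (table and column names are hypothetical; verify the WITH clause options against the appliance documentation):

    -- Replicated dimension table: a full copy is stored on every compute node.
    CREATE TABLE dbo.DimDate
    (
        DateKey      int      NOT NULL,
        CalendarYear smallint NOT NULL,
        MonthNumber  tinyint  NOT NULL
    )
    WITH ( DISTRIBUTION = REPLICATE );

    -- Distributed fact table: rows are hash-distributed on CustomerKey. The
    -- clustered index is declared here because PDW allows it only in CREATE TABLE.
    CREATE TABLE dbo.FactSales
    (
        DateKey     int   NOT NULL,
        CustomerKey int   NOT NULL,
        SalesAmount money NOT NULL
    )
    WITH (
        DISTRIBUTION = HASH (CustomerKey),
        CLUSTERED INDEX (DateKey)
    );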
In addition, you can use a CREATE TABLE AS SELECT statement to create a table from the results of a SELECT statement. You might use this technique when you are redistributing or defragmenting a table.
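A minimal redistribution sketch using the same hypothetical fact table:

    -- Sketch: redistribute an existing table on a different distribution column
    -- by recreating it with CREATE TABLE AS SELECT and then swapping names.
    CREATE TABLE dbo.FactSales_ByDate
    WITH ( DISTRIBUTION = HASH (DateKey) )
    AS
    SELECT * FROM dbo.FactSales;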
Query Processing

Query processing in Parallel Data Warehouse is more complex than in an SMP data warehouse because processing must manage high availability, parallelization, and data movement between nodes. In general, Parallel Data Warehouse's control node follows these steps to process a query (shown in Figure 6-5):

1. Parse the SQL statement.
2. Validate and authorize the objects.
3. Build a distributed execution plan.
4. Run the execution plan.
5. Aggregate query results.
6. Return the results to the client.
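To make these steps concrete, consider a typical star-join aggregate over the hypothetical tables sketched earlier:

    -- The control node parses this statement and builds a distributed plan.
    -- Because DimDate is replicated and FactSales is distributed, each compute
    -- node joins and aggregates its local rows in parallel; the control node
    -- then combines the partial results and returns them to the client.
    SELECT d.CalendarYear,
           SUM(f.SalesAmount) AS TotalSales
    FROM dbo.FactSales AS f
    JOIN dbo.DimDate AS d ON f.DateKey = d.DateKey
    GROUP BY d.CalendarYear;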
To load large amounts of data, you first move the files to Parallel Data Warehouse’s Landing Zone. You then invoke a command-line tool, DWLoader, and specify options to load the data into the appliance. Or you can use Integration Services to move data to the Landing Zone and call the loading functionality directly. To load small amounts of data, you can connect to the control node and use the SQL INSERT statement. Queries can run concurrently with load processing, so your data warehouse is always available during ETL processing.
Business Intelligence Integration

Parallel Data Warehouse integrates with the SQL Server business intelligence (BI) components—Integration Services, Reporting Services, and SQL Server Analysis Services.

Integration Services

Integration Services is the ETL component of SQL Server. You use Integration Services packages to extract and merge data from multiple data sources and to filter and cleanse your data before loading it into the data warehouse.
CHAPTER 7
Master Data Services

Microsoft SQL Server 2008 R2 Master Data Services (MDS) is another new technology in the SQL Server family and is based on software from Microsoft's acquisition of Stratature in 2007. Just as SQL Server Reporting Services (SSRS) is an extensible reporting platform that ships with ready-to-use applications for end users and administrators, MDS is both an extensible master data management platform and an application for developing, managing, and deploying master data models.
This scenario presents additional problems for operational master data in an organization because there is no coordination across multiple systems. Business users cannot be sure which of the many available systems has the correct information.
In addition to offering flexibility, MDS allows you to manage master data proactively. Instead of discovering data problems in failed ETL processes or inaccurate reports, you can engage business users as data stewards. As data stewards, they have access to Master Data Manager, a Web application that gives them ownership of the processes that identify and react to data quality issues.
Master Data Services Configuration Manager
Before you can start using MDS to manage your master data, you use Master Data Services Configuration Manager. This configuration tool includes pages to create the MDS database, configure the system settings for all Web services and applications that you associate with that database, and configure the Master Data Services Web application.
■ Integration Management Use this area to create and process batches for importing data from staging tables into the MDS database, view errors arising from the import process, and create subscription views for consumption of master data by operational and analytic applications.
■ System Administration Use this area to create a new model and its entities and attributes, define business rules, configure notifications for failed data validation, and deploy a model to another system.
An entity can also have any number of domain-based attributes whose values are members of another related entity. In the example in Figure 7-1, the ProductSubCategory attribute is a domain-based attribute. That is, the ProductSubCategory codes are attribute values in the Product entity, and they are also members of the ProductSubCategory entity. A third type of attribute is the file attribute, which you can use to store a file or image. You have the option to organize attributes into attribute groups.
FIGURE 7-3 A collection

Master Data Maintenance
Master Data Manager is more than a place to define model objects. It also allows you to create, edit, and update leaf members and consolidated members. When you add a leaf member, you initially provide values for only the Name and Code attributes, as shown in Figure 7-4. You can also use a search button to locate and select the parent consolidated member in each hierarchy.
FIGURE 7-5 Attributes and validation issues

Business Rules
One of the goals of a master data management system is to set up data correctly once and to propagate only valid changes to downstream systems. To achieve this goal, the system must be able to recognize valid data and to alert you when it detects invalid data. In MDS, you create business rules to describe the conditions that cause the data to be considered invalid.
FIGURE 7-7 The Required Fields business rule

When creating a business rule, you can use any of the following types of actions:
■ Default Value Sets the default value of an attribute to blank, a specific value that you supply in the business rule, a generated value that increments from a specified starting value, or a value derived by concatenating multiple attribute values
■ Change Value Updates the attribute value to blank, another attribute value, or a value derived by concatenating multiple attribute values
Transaction Logging
MDS uses a transaction log, as shown in Figure 7-9, to capture every change made to master data, including the master data value before and after the change, the user who made the change (not shown), the date and time of the change, and other identifying information about the master data. You can access this log to view all transactions for a model by version in the Version Management area of Master Data Manager.
Integration
Master Data Manager also supports data integration between MDS and other applications through its Integration Management area for importing and exporting data. However, the import and export processes here are nothing like those of the SQL Server Import And Export wizard.
When the batch processing is complete, you can review the status of the batch in the staging batch log, which is available in Master Data Manager, as shown in Figure 7-11.

FIGURE 7-11 The staging batch log

If the log indicates any errors for the staging batch, you can select the batch in the log and then view the Staging Batch Errors page to see a description of the error for each record that did not successfully load into the MDS database.
After you define a subscription view in Master Data Manager as an entity-based leaf member view, you can query the Product view and see the results in SQL Server Management Studio, as shown in Figure 7-12.

FIGURE 7-12 Querying the Product subscription view

Administration
Of course, Master Data Manager supports administrative functions, too. Administrators use it to manage the versioning process of each master data model and to configure security for individual users and groups of users.
You can create a new version by copying a previously committed version and allowing users to make their changes to the new version.

FIGURE 7-13 Model versions

Security
MDS uses a role-based authorization system that allows you to configure security both by functional area and by object. For example, you can restrict a user to the Explorer area of Master Data Manager, as shown in Figure 7-14, while granting another user access to only the Version Management and Integration Management areas.
A user restricted in this way sees a reduced home page, as shown in Figure 7-15. An administrator with full access privileges would instead see the full list of functional areas on the home page.

FIGURE 7-15 The Master Data Manager home page for a user with only Explorer permissions

Data security begins at the model level. When you deny access to a model, the user does not even see it in Master Data Manager. With Read-only access, a user can view the model structure and its data but cannot make changes.
Notice the lock icons in the Color table on the right side of the page. These icons indicate that the values in the table are not editable. The first two buttons above the table allow a user with Update permissions to add or delete a member, but those buttons are unavailable here because the user has Read-only permission. The user can also navigate through the hierarchy in the tree view on the left side of the page, but the labels are gray to indicate the Read-only status for every member of the hierarchy.
More specifically, the security configuration allows this user to edit only the Bikes and Accessories categories in the Retail group, but the user cannot edit categories in the Wholesale group. Let’s look first at the effect of these permissions on the user’s experience on the ProductCategory page (shown in Figure 7-19). The lock icon in the first column indicates that the Components and Clothing categories are locked for editing.
Model Deployment
When you have finalized the master data model structure, you can use the model deployment capabilities in Master Data Manager to serialize the model and its objects as a package that you can later deploy on another server. In this way, you can move a master data model from development to testing and to production without writing any code or moving data at the table level. The deployment process does not copy security settings.
■ Microsoft.MasterDataServices.Services Contains a class to provide instances of the MdsServiceHost class and a class to provide an API for operations related to business rules
■ Microsoft.MasterDataServices.Services.DataContracts Contains classes to represent models and model objects
■ Microsoft.MasterDataServices.Services.MessageContracts Contains classes to represent requests and responses resulting from MDS operations
■ Microsoft.MasterDataServices.Services.
■ Mdq.RegexIsValid Indicates whether the regular expression is valid
■ Mdq.RegexMask Converts a set of regular expression option flags into a binary value
■ Mdq.RegexMatches Finds all matches of a regular expression in an input string
■ Mdq.RegexReplace Replaces matches of a regular expression in an input string with a different string
■ Mdq.RegexSplit Splits an input string into an array of strings based on the positions of a regular expression within the input string
■ Mdq.
CHAPTER 8
Complex Event Processing with StreamInsight

Microsoft SQL Server StreamInsight is a complex event processing (CEP) engine. This technology is a new offering in the SQL Server family, making its first appearance in SQL Server 2008 R2. It ships with the Standard, Enterprise, and Datacenter editions of SQL Server 2008 R2. StreamInsight is both an engine built to process high-throughput streams of data with low latency and a Microsoft .NET Framework platform for developers of CEP applications.
Similarly, there are certain types of applications that benefit from the ability to analyze data as close as possible to the time that the applications capture the data. For example, companies selling products online often use clickstream analysis to change the page layout and site navigation and to display targeted advertising while a user remains connected to a site. Credit card companies monitor transactions for exceptions to normal spending activities that could indicate fraud.
Data Structures
The high-throughput data that StreamInsight processes is known as a stream. More specifically, a stream is a collection of data that changes over time. For example, a Web log contains data about each server hit, including the date, time, page request, and Internet protocol (IP) address of the visitor. If a visitor clicks on several pages in the Web site, the Web log contains multiple lines, or hits, for the same visitor, and each line records a different time.
FIGURE 8-1 StreamInsight architecture (event sources such as data feeds, event stores and databases, Web servers, and devices and sensors feed the input adapters; events flow through the CEP engine's standing queries, along with static reference data, to output adapters that deliver them to targets such as event stores and databases, KPI dashboards and SharePoint UI, and pagers and monitoring devices)

Input Adapters
The input adapters translate the incoming events into the event format that the CEP engine requires.
Output Adapters
The output adapters reverse the operations of the input adapters by translating events into a format that is usable by the target device and then sending the translated data to the device. The development process for an output adapter is very similar to the process you use to develop an input adapter.
If you choose to deploy CEP as a standalone server, there are some limitations that affect the way you develop applications. First, you can use only the explicit server development model (which is described in the next section of this chapter) when developing CEP applications for a standalone server. Second, you must connect to the CEP server by using the Web service Uniform Resource Identifier (URI) of the CEP server host process.
An untyped input adapter that retrieves data from a SQL Server database must adapt to the schema of the table that it queries; the adapter cannot know that schema in advance. Instead, you provide the table schema in a configuration specification when the adapter is bound to the query. Conversely, an untyped output adapter receives the event type description, which contains a list of fields, when the query starts. The untyped output adapter must then map the event type to the schema of the destination data source, typically in a configuration specification.
An untyped input adapter must use the configuration specification to populate the fields one at a time and enqueue the event. The untyped output adapter works similarly, but instead it must be able to use the configuration specification to retrieve query processing results from a dequeued event. The next step is to develop an AdapterFactory object as a container class for your input and output adapters. You use an AdapterFactory object to share resources between adapter implementations and to pass configuration parameters to adapter constructors.
The final step is to create a .NET assembly for the adapter. At minimum, the adapter includes a constructor, a Start() method, a Resume() method, and either a ProduceEvents() or ConsumeEvents() method, depending on whether you are developing an input adapter or an output adapter. You can see the general structure of the adapter class in the following code example:

public class TextFilePointInput : PointInputAdapter
{
    public TextFilePointInput(TextFileInputConfig configInfo, CepEventType cepEventType)
    { ...
Another task the adapter must perform is classification of an event. That is, the adapter must specify the event kind as either INSERT or Current Time Increment (CTI). The adapter adds events with the INSERT event kind to the stream as it receives data from the source. It uses the CTI event kind to ignore any additional INSERT events it receives afterward that have a start time earlier than the timestamp of the CTI event.
Queries
After you create an event stream object, you write a LINQ expression on top of the event stream object. You use LINQ expressions to define the fields for output events, to filter events before query processing, to group events into subsets, and to perform calculations, aggregations, and ranking. You can even use LINQ expressions to combine events from multiple streams through join or union operations. Think of LINQ expressions as the questions you ask of the streaming data.
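For example, here is a minimal sketch of a filter-and-projection query; the input stream and its payload fields (Url and IpAddress) are illustrative:

// Keep only checkout page hits and project two payload fields into the output events.
var checkoutHits = from e in inputStream
                   where e.Url == "/checkout"
                   select new { e.IpAddress, e.Url };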
Event Windows
A window represents a subset of data from an event stream for a period of time. After you create a stream of windows, you can perform aggregation, TopK (a LINQ operation described later in this chapter), or user-defined operations on the events that the windows contain. For example, you can count the number of events in each window. You might be inclined to think of a window as a way to partition the event stream by time.
As you might guess, the key to working with windows is to have a clear understanding of the time span that each window covers. There are three types of window streams that StreamInsight supports—hopping windows, snapshot windows, and count windows. In a hopping windows stream, each window spans an equal time period. In a snapshot windows stream, the size of a window depends on the events that it contains.
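Here is a sketch of a hopping windows stream that counts events in one-hour windows starting every 15 minutes. The window and hop sizes are illustrative, and depending on the StreamInsight version, the window methods may also require an output-policy argument:

var hourlyCounts = from eventWindow in inputStream.HoppingWindow(
                       TimeSpan.FromHours(1),     // window size: each window spans one hour
                       TimeSpan.FromMinutes(15))  // hop size: a new window starts every 15 minutes
                   select new { count = eventWindow.Count() };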
Although you can use the HoppingWindow method to create a stream of tumbling windows—hopping windows that do not overlap because the hop size equals the window size—StreamInsight also provides a dedicated TumblingWindow method. The following code illustrates how to count events in tumbling windows that occur every half hour:

var outputStream = from eventWindow in inputStream.TumblingWindow(TimeSpan.FromMinutes(30))
                   select new { count = eventWindow.Count() };
Count windows are completely different from the other window types because the size of the windows is variable. When you create windows, you provide a parameter n as a count of events to fulfill within a window. For example, assume n is 2 as shown in Figure 8-5. The first window starts when the first event starts and ends when the second event starts, because a count of 2 events fulfills the specification. The second event also resets the counter to 1 and starts a new window.
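A sketch of a count windows stream, assuming the CountByStartTimeWindow method and output policy described in the product documentation; the count and the payload field x are illustrative:

var pairAverages = from eventWindow in inputStream.CountByStartTimeWindow(
                       2,  // n: number of event start times that fill a window
                       CountWindowOutputPolicy.PointAlignToWindowEnd)
                   select new { avg = eventWindow.Avg(e => e.x) };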
Assume you want to apply the Sum and Avg aggregations to field x in an input stream. The following example shows you how to use these aggregations as well as the Count aggregation for each snapshot window:

var outputStream = from eventWindow in inputStream.Snapshot()
                   select new { sum = eventWindow.Sum(e => e.x),
                                avg = eventWindow.Avg(e => e.x),
                                count = eventWindow.Count() };

TopK
A special type of aggregation is the TopK operation, which you use to rank and filter events in an ordered window stream.
Joins
You can use a join operation to match events from two streams. The CEP server first matches events only if they have overlapping time intervals, and then applies the conditions that you specify in the join predicate. The output of a join operation is a new event that combines payloads from the two matched events. The general pattern joins events from two input streams on a field x that has the same value in each event, as shown in the sketch below.
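A minimal sketch of the join pattern; the two streams and the payload fields x, y, and z are illustrative:

// Match events whose time intervals overlap and whose x values are equal,
// then combine fields from both payloads into the output event.
var joinedStream = from e1 in inputStream1
                   join e2 in inputStream2
                   on e1.x equals e2.x
                   select new { e1.x, e1.y, e2.z };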
Query Template Binding
The method that the CEP server uses to instantiate the query template as a standing query depends on the development model that you use. If you are using the explicit server development model, you create a query binder object; if you are using the implicit server development model, you create an event stream consumer object.

The Query Binder Object
In the explicit server development model, you first create explicit input and output adapter objects.
The Query Object
In both the explicit and implicit development models, you create a query object. With that object instantiated, you can use the Start() and Stop() methods. The Start() method instantiates the adapters using the adapter factories, starts the event processing engine, and calls the Start() methods for each adapter. The Stop() method sends a message to the adapters that the query is stopping and then shuts down the query.
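A brief usage sketch, assuming a Query instance named query has already been obtained through the query binder or event stream consumer:

query.Start();   // instantiate the adapters and begin processing events
// ... the standing query runs until you shut it down ...
query.Stop();    // signal the adapters and shut down the query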
Windows PowerShell Diagnostics
For quick analysis, you can use Windows PowerShell scripts to view diagnostic information rather than writing a complete diagnostic application. Before you can use a Windows PowerShell script, the StreamInsight server must be running a query. If the server is running as a hosted assembly, you must expose the Web service. You start the diagnostic process by loading the Microsoft.
CHAPTER 9
Reporting Services Enhancements

If you thought Microsoft SQL Server 2008 Reporting Services introduced a lot of great new features to the reporting platform, just wait until you discover what's new in Reporting Services in SQL Server 2008 R2. The Reporting Services development team at Microsoft has been working hard to incorporate a variety of improvements into the product that should make your life as a report developer or administrator much simpler.
Combining Data from More Than One Dataset
To display data from more than one source in a table (or in any data region, for that matter), you must create a dataset that somehow combines the data, because a data region binds to one and only one dataset. You could create a query for the dataset that joins the data if both sources are relational and accessible with the same authentication.
=Lookup(Fields!StateProvinceCode.Value, Fields!StProv.Value, Fields!StProvName.Value, "Dataset1")

The MultiLookup function also requires a one-to-one relationship between the source and destination, but it accepts a set of source values as input. Reporting Services matches each source value to a destination value one by one, and then returns the matching values as an array. You can then use an expression to transform the array into a comma-separated list, as shown in Figure 9-2.
When there is a one-to-many relationship between the source and destination values, you use the LookupSet function. This function accepts a single value from the source dataset as input and returns an array of matching values from the destination dataset. You could then use the Join function to convert the result into a delimited string, as in the example for the MultiLookup function, or you could use other functions that operate on arrays, such as the Count function, as shown in Figure 9-3.
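For example, a sketch that flattens the returned array into a delimited string; the field names, dataset name, and delimiter are illustrative:

=Join(LookupSet(Fields!CustomerID.Value, Fields!CustomerID.Value, Fields!PhoneNumber.Value, "PhoneNumbers"), ", ")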
FIGURE 9-4 Aggregation of an aggregation

Here is the expression for the value displayed in the Monthly Average row:

=Avg(Sum(Fields!SalesAmount.Value, "EnglishMonthName"))

Conditional Rendering Expressions
The expression language in SQL Server 2008 R2 Reporting Services includes a new global variable that allows you to set the values for "look-and-feel" properties based on the rendering format used to produce the report.
Another option is to use the RenderFormat global variable with the IsInteractive member to set the conditions of a property. For example, let’s say you have a report that displays summarized sales but also allows the user to toggle a report item to display the associated details.
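One way to sketch this: drive the detail group's Hidden property from the global variable so that details start collapsed (and can be toggled) in interactive formats but render fully expanded in exports. The property wiring is illustrative:

=RenderFormat.IsInteractive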
FIGURE 9-6 Changing report variables

To write to your report variable, you use the SetValue method of the variable. For example, assume that you have set up the report to insert a page break between group instances, and you want to update the execution time when the group changes. Add a report variable to the report, and then add a hidden text box to the data region with the group used to generate a page break.
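In the hidden text box, an expression along these lines updates the variable each time the group changes; the variable name Timestamp is illustrative:

=Variables!Timestamp.SetValue(Now())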
Pagination Properties
There are three new properties available to manage pagination: Disabled, ResetPageNumber, and PageName. These properties appear in the Properties window when you select a tablix, rectangle, or chart in the report body or a group item in the Row Groups or Column Groups pane. The most common reason to set values for these properties is to define different paging behaviors based on the rendering format, now that the global variable RenderFormat is available.
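As a sketch, you might reset page numbering at each group instance only when rendering to Excel, so that every worksheet restarts its numbering. The renderer name "EXCEL" is an assumption to verify against your installed rendering extensions; set the group's ResetPageNumber property to an expression such as:

=(RenderFormat.Name = "EXCEL")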
Last, consider how you can use the PageName property. As one example, instead of using page numbers in an Excel workbook, you can assign a unique name to each sheet in the workbook. You might, for example, use the group expression that defines the page break as the PageName property.
FIGURE 9-9 Synchronized groups

Text Box Orientation
Each text box has a WritingMode property that by default displays text horizontally. There is also an option to display text vertically to accommodate languages that display in that format. Although you could use the vertical layout for other languages, you probably would not be satisfied with the result because it renders each character from top to bottom.
Data Visualization
Prior to SQL Server 2008 R2 Reporting Services, your only option for enhancing a report with data visualization was to add a chart or gauge. Now your options have been expanded to include data bars, sparklines, indicators, and maps.

Data Bars
A data bar is a special type of chart that you add to your report from the Toolbox window. A data bar shows a single data point as a horizontal bar or as a vertical column.
Sparklines
Like data bars, sparklines provide a data visualization alongside the detailed data. Whereas a data bar usually shows a single point, a sparkline shows multiple data points over time, making it easier to spot trends. You can choose from a variety of sparkline types, such as columns, area charts, pie charts, or range charts, but most often sparklines are represented as line charts. As you can see in Figure 9-12, sparklines are pretty bare compared to a chart.
FIGURE 9-13 Indicator types

After selecting a set of indicators, you associate the set with a value in your dataset or with an expression, such as a comparison of a dataset value to a goal. You then define the rules that determine which indicator properly represents the status. For example, you might create an expression that compares SalesAmount to a goal.
Although you can manually configure the properties for the map and each map layer, the easiest way to get started is to drag a map from the Toolbox window to the report body (if you are using Business Intelligence Development Studio) or click the map in the ribbon (if you are using Report Builder 3.0).
Shared Datasets
A shared dataset allows you to define a query once for reuse in many reports, much as you can create a shared data source to define a reusable connection string. Having shared datasets available on the server also helps SQL Server 2008 R2 Report Builder 3.0 users develop reports more easily, because the dataset queries are already available for users who lack the skills to develop queries without help. The main requirement when creating a shared dataset is to use a shared data source.
Reporting Services can respond to a report request faster, and users are generally happier with the reporting system. However, cache storage is not unlimited. Periodically, the cache expires, and the next person who requests the report has to wait for the report execution process to complete. A workaround for this scenario is to create a subscription that uses the NULL delivery provider to populate the cache in advance of the first user's request.
You can publish report parts both from Report Builder 3.0 and Report Designer in Business Intelligence Development Studio. In Report Designer, the Report menu contains the Publish Report Parts command. In the Publish Report Parts dialog box, shown in Figure 9-17, you select the report items that you want to publish. You can replace the report item name and provide a description before publishing.
Although you can publish report parts from both Report Designer and Report Builder 3.0, you can use only Report Builder 3.0 to find and use those report parts. More information about Report Builder 3.0 appears later in this chapter in the "Report Builder 3.0" section.

Atom Data Feed
SQL Server 2008 R2 Reporting Services includes a new rendering extension to support exporting report data to an Atom service document.
Report Builder 3.0
Report Builder 1.0 was the first release of a report development tool targeted at business users. That version restricted users to queries based on a report model and supported limited report layout capabilities. Report Builder 2.0 was released with SQL Server 2008 and gave the user expanded capabilities for importing queries from other report definition files or for writing a query on any data source supported by Reporting Services. In addition, Report Builder 2.
FIGURE 9-19 The Report Part Gallery

Report Access and Management
In this latest release of Reporting Services, you can benefit from a few enhancements that improve access to reports and to management operations in Report Manager, as well as a new feature that supports sandboxing of the report server environment.

Report Manager Improvements
When you open Report Manager for the first time, you will immediately notice the improved look and feel.
FIGURE 9-20 Report Viewer

Notice also that the Report Viewer does not include a link to open the report properties. Rather than requiring you to open a report first and then navigate to the properties pages, Report Manager gives you direct access to the report properties from a menu on the report listing page, as shown in Figure 9-21. Another direct access improvement to Report Manager is the ability to test the connection for a data source on its properties page.
Report Viewer Improvements
The display of reports is also improved in the Report Viewer available in this release of SQL Server, which now supports AJAX (Asynchronous JavaScript and XML). If you are familiar with earlier versions of Reporting Services, you can see the improvement that AJAX provides by changing parameters or by using drilldown.
SharePoint Integration
SQL Server 2008 R2 Reporting Services continues to improve integration with SharePoint. In this release, you find better options for configuring SharePoint 2010 for use with Reporting Services, working with scripts to automate administrative tasks, using SharePoint lists as data sources, and integrating Reporting Services log events with the SharePoint Unified Logging Service.
When you create the data source by using the Microsoft SharePoint List connection type and provide credentials for authentication, you must supply a connection string to the site or subsite in the form of a URL that references the site or subsite. That is, use a connection string such as http://MySharePointWeb/MySharePointSite or http://MySharePointWeb/MySharePointSite/Subsite.
CHAPTER 10
Self-Service Analysis with PowerPivot

Many business intelligence (BI) solutions require access to centralized, cleansed data in a data warehouse, and there are many good reasons for an organization to continue to maintain a data warehouse for these solutions. There are even self-service tools available that allow users to build ad hoc reports from this data. But for a variety of reasons, business users cannot limit their analyses to data that comes from the corporate data warehouse.
PowerPivot for Excel
PowerPivot for Excel is an add-in that extends the functionality of Excel 2010 to support analysis of large, related datasets on your computer. After installing the add-in, you can import data from external data sources and integrate it with local files, and then develop the presentation objects, all within the Excel environment. You save all your work in a single file that is easy to manage and share.
The Atom Data Feed Provider
Last, the add-in installs an Atom data feed provider to allow you to import data from Atom data feeds into a PowerPivot workbook. A data feed provides data to a client application on request. The structure remains the same each time you request data, but the data can change between requests. Usually, you identify the online data source as a URL-addressable HTTP endpoint.
■ Databases
• Sybase
• IBM DB2 8.1
• Any database that can be accessed by using an OLE DB provider or an ODBC driver
■ Files
• Delimited text files (.txt, .tab, and .csv)
• Files from Excel 97 through Excel 2010
• PowerPivot workbooks published to a PowerPivot-enabled Microsoft SharePoint Server 2010 farm
■ Data feeds
• SQL Server 2008 R2 Reporting Services Atom data feeds
• SharePoint lists
• ADO.NET Data Services
• Commercial datasets, such as Microsoft Codename "Dallas" (http://pinpoint.
Linked Tables
If your data is in an Excel table already, or if you convert a range of data into an Excel table, you can add the table to your workbook in the Excel window and then use the Create Linked Table button to import the data into the PowerPivot window. You can find this button on the PowerPivot ribbon in the Excel window, as shown in Figure 10-3. After the data is available in the PowerPivot window, you can then enhance it by defining relationships with other tables or by adding calculations.
Relationships
By building relationships between the data, you can analyze the data as if it all came from a common source. Relationships enable you to use related data in the same PivotTable even though the underlying data actually comes from different sources. Defining a relationship between columns in two PowerPivot tables is similar to defining a foreign key relationship between two columns in a relational database.
FIGURE 10-5 Filtering a numeric column by value

IMPORTANT Use of a filter is not a security measure. Although a filter effectively hides data from a presentation, anyone who can open the Excel workbook can also clear the filters and view the data if he or she has installed the PowerPivot add-in.

Columns
As part of the data preparation process, you might need to make changes to column properties.
You can use the Hide and Unhide button on the Design tab (shown in Figure 10-4) to control the appearance of a column in the PowerPivot window and in the PivotTable Field List. For example, you might choose to display a column in the PowerPivot window but hide it in the PivotTable Field List because you want to use it in a formula for a calculated column.
When you insert a PivotChart, Excel also creates an associated PivotTable to store the data that you selected for the chart. Just as you do with a standard PivotTable or PivotChart, you select the placeholder and then use the associated field list to select and arrange fields for the selected object, as shown in Figure 10-8.

FIGURE 10-8 A PivotChart and PivotTable report

Cube Functions
As an alternative to the symmetrical layout of a PivotTable, you can use cube functions in cell formulas to arrange PowerPivot data in a free-form arrangement of cells.
FIGURE 10-9 The CUBEVALUE function

Slicers
The task pane for PowerPivot is similar to the one you use for an Excel PivotTable, but it includes two additional drop zones for slicers. Slicers are a new feature in Excel 2010 that can be associated with PowerPivot. Slicers work much like report filters, but they link to multiple objects, such as a PivotTable and a PivotChart, so that a slicer selection can filter an entire report.
Data Analysis Expressions
The ability to combine data from multiple sources into a single PivotTable is amazingly powerful, but you can create even more powerful reports by enriching the PowerPivot data with Data Analysis Expressions (DAX) to add custom aggregations, calculations, and filters to your report. DAX is a new expression language for use with PowerPivot for Excel. DAX formulas are similar to Excel formulas.
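A minimal sketch of the two most common forms, using illustrative table and column names. A calculated column on a Product table might follow a relationship to return the matching category name:

=RELATED(ProductCategory[CategoryName])

A measure might total a sales column:

=SUM(ResellerSales[SalesAmount])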
To create the measure, you provide a measure name, optionally provide a different name for the current PivotTable, and then specify the formula for the measure, as shown in Figure 10-11.

FIGURE 10-11 Measure settings

DAX Functions
The examples shown for a calculated column and a measure are very basic, although representative of the common ways that you would use DAX.
FUNCTION TYPE: Statistical
EXAMPLE: =AVERAGEX(ResellerSales, [SalesAmount] - [TotalProductCost])
DESCRIPTION: Evaluates the expression in the second argument for each row of the table in the first argument, and then calculates the arithmetic mean

FUNCTION TYPE: Text
EXAMPLE: =CONCATENATE([FirstName], [LastName])
DESCRIPTION: Returns a string that joins two text items

FUNCTION TYPE: Time Intelligence
EXAMPLE: =DATEADD([OrderDate], 10, day)
DESCRIPTION: Returns a table of dates obtained by adding the number of days specified in the second argument (or another period as specified by the third argument)
FIGURE 10-12 PowerPivot for SharePoint architecture (users view reports in a browser or view and create reports in Excel 2010 with PowerPivot; requests pass through Excel Web Access on the Web front end to Excel Calculation Services, the Excel Web Service, the PowerPivot Web Service, and the PowerPivot System Service on the application server, which rely on Analysis Services in VertiPaq mode and the PowerPivot database)

Analysis Services in VertiPaq Mode
To support users without the PowerPivot for Excel client, Excel Services connects to a server instance of Analysis Services in VertiPaq mode.
The PowerPivot System Service also provides load balancing across servers for query processing if multiple servers are available. Furthermore, the PowerPivot System Service manages active, reusable, and cached connections to PowerPivot workbooks, as well as administrative connections to other PowerPivot System Services on the SharePoint farm. To speed up access to data, the PowerPivot System Service caches a local copy of a workbook and stores it in Program Files\Microsoft SQL Server\MSAS10_50.POWERPIVOT\OLAP\Backup.
Content Management
Content management for PowerPivot is quite simple because the data and the presentation layout are kept in the same document. If they weren't, you would have to maintain separate files in different formats and then manually integrate them each time one of the files required replacement with fresh data. By storing the PowerPivot workbooks in SharePoint, you can reap the benefits applicable to any content type, such as workflows, retention policies, and versioning.
You can click the left or right arrow to bring a different thumbnail into the preview area. You can also switch to All Documents view, which allows you to see all the workbooks in a standard document library view. You can then download a document, check documents in or out, or perform any other activity that is permissible within a document library.

The Data Feed Library
A special type of document library is available for the storage of Atom service (.atomsvc) documents, also known as data service documents.
Any application that can connect to Analysis Services directly can use the PowerPivot Web Service. You simply use the SharePoint URL for the workbook instead of an Analysis Services server name in the connection string of the provider. For example, if you have a workbook named Bike Sales.xlsx in the PowerPivot Gallery located at http://<server>/PowerPivot Gallery, the SharePoint URL to use as an Analysis Services data source is http://<server>/PowerPivot Gallery/Bike Sales.xlsx.
About the Authors
Ross Mistry is a technical architect at the Microsoft Technology Center (MTC) in Silicon Valley. Ross provides executive briefings, architectural design sessions, and proof-of-concept workshops to organizations located in Silicon Valley. His core specialty is Microsoft SQL Server, although he also focuses on Windows Active Directory, Microsoft Exchange, and Windows Server Hyper-V.
What do you think of this book? We want to hear from you! To participate in a brief online survey, please visit: microsoft.com/learning/booksurvey Tell us how well this book meets your needs—what works effectively, and what we can do better. Your feedback will help us continually improve our books and learning resources for you.