> White Paper | Best Practices in Digital Transformation
Digital transformation is a long-term proposition, so best practice serves
as a means of progressing through the steps by which a strategy
coordinating the requirements of the company with the capabilities of
technology can be developed and implemented. The strategy will progress
from project-by-project application towards one in which the whole
company is drawn together, and it will become increasingly proactive in
enabling the company to become a disruptive force in its own right:
Figure 7: The Digital Transformation Critical Path (Source: DCD 2017)
Figure 8: The Deployment of AI to Improve Energy Efficiency in
Google Data Centers (Source: Google)
EMBRYONIC: No real strategy in place
NASCENT: Loose strategy with implementation
limited to ad-hoc projects
EVOLVING: Development strategy coordinating IT
& business goals in place
ASCENDANT: Strategy used to drive innovation
and disruption, and self-correcting
MATURE: Strategy integrated and effective
Within each stage of progression the implementation of ‘best
practice’ can be represented by a process of:
1. Planning: to establish targets, responsibilities and
processes
2. Mapping: ensuring that decisions reflect the objectives
established through planning, by mapping the IT outcomes
onto the corporate requirements
3. Decision making: making the correct choice of technology for
the process
4. Implementation of the transformation
5. Review/learning to feed back into future strategies.
So, in what situations have data centers (of whatever kind)
developed and applied digital transformation strategies?
Since 2011, Google has developed algorithms to improve the accuracy
and relevance of its search functions. Within its data centers, it has
used DeepMind AI to cut energy bills by putting the AI system in
charge of power use in parts of those facilities. This has reduced the
power required for cooling, which in most data centers is the largest
non-IT consumer of power. To achieve this, the neural networks control
around 120 variables and take data from sensors located across the
server racks. The system is self-improving – the analysis of data
allows further sensors to be deployed to improve the accuracy of the
intelligence. Google claimed to have achieved a PUE of 1.12 across its
data centers by 2014, an average that it has maintained into 2017.
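For readers unfamiliar with the metric, PUE (Power Usage Effectiveness) is simply the ratio of total facility power to the power delivered to IT equipment, so a value of 1.12 means only 12% of the power drawn is overhead such as cooling. A minimal sketch of the calculation, using illustrative load figures that are assumptions rather than Google's actual data:

```python
# Power Usage Effectiveness (PUE) = total facility power / IT equipment power.
# A PUE of 1.0 would mean every watt drawn goes to IT equipment;
# anything above 1.0 is overhead (cooling, power distribution, lighting).

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Return the PUE ratio for the given power draws (in kW)."""
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical example: a 1,000 kW IT load inside a facility drawing
# 1,120 kW overall corresponds to the 1.12 PUE figure cited above.
print(round(pue(1120, 1000), 2))  # 1.12
```

Cutting cooling power, as the DeepMind system does, lowers the numerator while leaving the IT load unchanged, which is why cooling efficiency translates directly into a lower PUE.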
As businesses move towards multiple environments to meet their IT
needs, and away from the data center towards the concept of data
infrastructure (in which MTDCs usually play a role), there needs to be
visibility and coordination across different parts of the
infrastructure and across the networking, computing and storage
functions. For the MTDC, profitability is based on minimising wastage,
in particular of the resources that form part of the business model:
space, power and networks.
“It’s about controlling costs, looking especially at cooling and power,
but as the operation becomes more complex so there are a lot
of possible sources of cost. Money that cannot be charged back
comes off the bottom line.” [IT Services]
“We are getting real pressure from the top – we need to cut costs
in the data center and improve efficiency. It’s strange – as our data
center becomes more valuable so it becomes more of a focus of
cost cutting.” [Financial Services]
This is the monitoring role that has been fulfilled in the past
by a variety of technologies including DCIM software, building
management systems [BMS] and systems provided by infrastructure
suppliers for their suite of products. DCIM has evolved in line
with changing data center profiles to take data from both the
infrastructure and IT stacks and analyse it to facilitate efficiency and
quality of operations.
As the data center becomes more digitized in terms of its
infrastructure stack – through equipment that is coordinated and
managed through operational systems, and through the software-defined
systems adopted to reduce the costs of infrastructure, improve
efficiency and act as the most effective access into cloud systems –
this monitoring role grows in importance. Software-defined
infrastructure (SDI) can be defined as technical computing
infrastructure entirely under the control of software, with no operator
or human intervention. It operates independently of any
hardware-specific dependencies.