AI in the Data Center: A Reality Check

Artificial intelligence is poised to transform enterprise data centers, but organizations need to take a practical approach.

Marcia Savage

July 13, 2018


Artificial intelligence has been the theme of many technology conferences this year, including Microsoft Build and Google I/O, a clear indication of its importance to the industry. According to a Forrester Research survey last year, 70% of enterprises expect to implement AI this year, and 20% said they would deploy AI to make decisions. IDC predicts that global spending on cognitive and AI systems will grow nearly 55% this year to $19.1 billion, and that 75% of enterprise applications will use AI by 2021.

With this in mind, it's vital that organizations have infrastructure that supports AI application development and delivers the speed and performance those applications need.

However, AI is still in its early stages. Few organizations, if any, have come close to where Google is with AI in the data center. Google has used its DeepMind AI engine to make its data centers more efficient by incorporating a system of neural networks. Doing that effectively requires a firm grasp of the mechanics, extensive training, and huge test sets to validate the system before it is ever put into production. To develop and use neural networks correctly, organizations need significant expertise and computing resources.

That's not to say organizations can't put themselves in a position to prepare for AI in the data center so that they're ready when the time comes. But there are a number of issues organizations need to be aware of if they want to use AI effectively. Here are four tips:

Look beyond the hype

If organizations want to assess the benefits of AI properly, it's important to look past the hype. It's easy to underestimate the amount of time, knowledge, and data required to implement AI systems effectively, and there's a real danger of handing decision-making over to AI too early in the implementation process. AI needs time to learn and to develop in the environment before it can be trusted to make decisions and take actions.

Get a grip on management

Organizations will increasingly need to rely on automation to keep pace with the growth of compute and the distributed nature of compute resources. However, that does not mean they need complex algorithms from neural nets to become more efficient. Effective data collectors that feed good data into a condition system, combined with state machines that take action when relevant conditions change, can be a very effective step toward a self-healing data center; a minimal sketch of that pattern follows below.
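To make the pattern concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the Reading and Collector names, the inlet_temp_c metric, and the 27.0-degree threshold are stand-ins for real telemetry and real remediation hooks, not the API of any particular product.

    # A minimal sketch of the collector -> condition -> state machine
    # pattern. All names here (Reading, Collector, the "inlet_temp_c"
    # metric, the 27.0 C threshold) are hypothetical examples.
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Reading:
        metric: str
        value: float

    class Collector:
        # Gathers one metric from the environment; stubbed for illustration.
        def poll(self) -> Reading:
            return Reading(metric="inlet_temp_c", value=27.5)

    @dataclass
    class Condition:
        # A named threshold check over a reading.
        name: str
        predicate: Callable[[Reading], bool]

    class StateMachine:
        # Tracks "normal" vs. "degraded" and acts only on transitions,
        # so a noisy metric doesn't fire the same remediation repeatedly.
        def __init__(self) -> None:
            self.state = "normal"

        def evaluate(self, reading: Reading, condition: Condition) -> None:
            if condition.predicate(reading) and self.state == "normal":
                self.state = "degraded"
                self.remediate(reading, condition)
            elif not condition.predicate(reading) and self.state == "degraded":
                self.state = "normal"

        def remediate(self, reading: Reading, condition: Condition) -> None:
            # A real system might raise fan speed, migrate VMs, or open
            # a ticket; this sketch just records the event.
            print(f"{condition.name}: {reading.metric}={reading.value}")

    # Wire it together: the collector feeds the condition system, and the
    # state machine takes action when a relevant condition changes.
    collector = Collector()
    overheat = Condition("overheat", lambda r: r.value > 27.0)
    machine = StateMachine()
    machine.evaluate(collector.poll(), overheat)

The design point is that deterministic rules over good data can deliver much of the self-healing behavior an operations team wants, without the training and validation burden a neural network carries.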

AI can provide powerful capabilities, but it's hard to exploit them if the team cannot manage the AI systems and glean insights from the information gathered.

There’s only one Google

As mentioned, whatever objectives an organization might have for AI in the data center, emulating Google should probably not be among them. Google's DeepMind AI engine incorporated a system of neural networks, but developing and utilizing those networks effectively requires a huge amount of expertise and computing resources that most organizations simply don't have.

AI is not the answer to everything

AI is not going to solve every problem. It helps, but it isn't a magic bullet that can cure everything, so organizations should exercise caution when deploying an AI-driven service. There is no point-and-click, off-the-shelf AI software that makes a data center magically work better, and organizations with that expectation are setting themselves up for failure. It makes sense to incorporate AI into the facets of an organization where it fits, but it's just as important to keep utilizing other key data center technologies alongside it.

Right now, there’s a lot of buzz about AI. But there are still plenty of issues that need to be addressed and overcome before it becomes an everyday reality in the data center.

Jason Collier is a co-founder of Scale Computing, where he is responsible for evangelism and marketing. Previously, Collier was VP of technical operations at Corvigo, where he oversaw sales engineering, technical support, internal IT, and data center operations. Before Corvigo, he was VP of information technology and infrastructure at Radiate, where he architected and oversaw the deployment of the entire Radiate ad-network infrastructure, scaling it from under one million transactions per month when he started to more than 300 million at its peak.
