Watson Machine Learning Accelerates AI on IBM Power Systems
New accelerator drives up to 46x faster machine learning training compared to competitors[i]
Enterprise leaders looking to drive business value from artificial intelligence (AI) require an infrastructure composed of AI-optimized hardware and software that breaks performance barriers while also delivering AI insights when, and where, they want them.
And while the potential of AI to revolutionize a business is no longer a fantasy, there are still significant barriers to adoption. One of the largest is the lack of skills within an organization to exploit AI. According to Gartner’s 2019 CIO Survey[ii], when respondents were asked to name their organizations’ top three challenges to adopting AI, 54 percent cited a “lack of necessary staff skills” and 27 percent cited the “complexity of integrating AI with our existing infrastructure.”
IBM has heard similar feedback from our clients, which is why today we’re bringing together AI capabilities from IBM Watson with the AI infrastructure of IBM Systems to lower the barriers to enterprise AI adoption. I am delighted to announce the new Watson Machine Learning Accelerator (WML Accelerator), a new component of Watson Machine Learning (WML) designed to help enterprises train and deploy machine learning models built in IBM Watson Studio and monitored with IBM Watson OpenScale.
Making enterprise machine learning a “Snap”
The power of IBM’s AI strategy is in how we approach AI from end to end, including our belief that the foundation of AI is co-optimized hardware and software. When clients leverage purpose-built infrastructure designed, optimized and accelerated for AI, they open themselves up to potential performance gains that can help their business achieve faster insights and support larger enterprise-scale AI projects.
We validated that approach last year at IBM Think 2018, when we demonstrated the performance capabilities of IBM’s SnapML machine learning library running on IBM Power Systems servers, training a model on an advertising-focused tera-scale dataset 46x faster than a previously published Google Cloud result[iii] and setting a new record for that benchmark.
Since then, IBM researchers have been hard at work making SnapML a better tool for the enterprise. By integrating new automation features, IBM is making machine learning more accessible for enterprise users that may not have expert data scientists on staff, cutting down on time-intensive but necessary tasks in the machine learning workflow such as model selection and hyperparameter tuning. By scaling out across a cluster, as well as scaling up across many-core CPUs and powerful modern GPUs, SnapML is designed to identify an accurate model and its hyperparameter configuration in a timely fashion, helping enterprises potentially gain a competitive edge.
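To make the idea of automated model selection and hyperparameter tuning concrete, here is a minimal sketch using scikit-learn-style APIs, which SnapML estimators also follow. The candidate models, parameter grids, and synthetic dataset below are illustrative assumptions for this post, not the SnapML auto-ML interface itself.

```python
# Illustrative sketch only: scikit-learn-style automated model selection
# and hyperparameter tuning. SnapML estimators expose a compatible
# fit/predict interface, but this is not the SnapML auto-ML API.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV, train_test_split

# Synthetic stand-in for an enterprise tabular dataset
# (e.g., click prediction or flight-delay features).
X, y = make_classification(n_samples=10_000, n_features=40, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Candidate models and their hyperparameter search spaces (assumed values).
candidates = [
    (LogisticRegression(max_iter=1000), {"C": [0.01, 0.1, 1.0, 10.0]}),
    (RandomForestClassifier(), {"n_estimators": [100, 300], "max_depth": [5, 10, None]}),
]

best_score, best_model = 0.0, None
for estimator, param_space in candidates:
    # Randomized search over each candidate's hyperparameters with 3-fold CV.
    search = RandomizedSearchCV(estimator, param_space, n_iter=4, cv=3,
                                n_jobs=-1, random_state=0)
    search.fit(X_train, y_train)
    if search.best_score_ > best_score:
        best_score, best_model = search.best_score_, search.best_estimator_

print(f"Selected model: {best_model.__class__.__name__}, CV accuracy: {best_score:.3f}")
print(f"Held-out accuracy: {best_model.score(X_test, y_test):.3f}")
```

The point of the automation is the loop itself: the framework, rather than a data scientist, decides which model family and which hyperparameter configuration to keep, and SnapML accelerates the many training runs that loop requires by distributing them across a cluster and onto GPUs.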
“Many users don’t realize how vast the open source machine learning catalogue is, and it can be quite challenging to identify the right tool for your particular data or desired outcome,” said Simon Thompson, Research Computing Infrastructure Architect at the University of Birmingham. “The automated model and library selection capabilities of SnapML greatly reduce the time required to parse through all of these tools, allowing users to begin ML training much more quickly.”
With these new tools, IBM Research created a SnapML-based automated machine learning framework and ran it across five datasets illustrating enterprise use cases, such as predicting the likelihood of a traveler missing their flight, predicting the likelihood of someone clicking on an online ad, predicting the optimal salary for a job applicant, and, in a more playful but still rigorous case, predicting the likelihood of five random playing cards forming a valid poker hand.
We ran this SnapML-based framework on a cluster of four IBM Power Systems AC922 servers, each equipped with two 20-core IBM POWER9 CPUs and four GPUs. For comparison, two leading open source automated machine learning frameworks were deployed on the exact same configuration. Based on our own internal observations, the SnapML-based framework reached a specified accuracy level at least 10x faster than the competing frameworks across all five datasets.
Bringing it all together with IBM Watson on IBM Power Systems
We believe that a singular, cross-IBM AI strategy will best position our clients to deliver AI everywhere. WML Accelerator marks the first time that IBM has designed an integrated AI solution spanning IBM Watson and IBM Power Systems, unifying IBM’s best AI software with IBM’s best AI hardware. In our effort to make AI available anywhere, we’re also announcing IBM Cloud Private (ICP) for Data on IBM Power Systems with IBM Storage. Coupled with Watson on ICP for Data, we’re opening up possibilities for customers to leverage AI where they want it, when they want it, and with differentiated performance that can give them a competitive edge.
To see more AI and IBM Systems news from IBM Think, be sure to read:
- PowerAI Enterprise joins the Watson Family
- The Best of IBM Z and LinuxONE in the public and private cloud
- IBM Think 2019 Newsroom
[i] “Snap ML: A Hierarchical Framework for Machine Learning”, https://arxiv.org/abs/1803.06333
[ii] Gartner, 2019 CIO Survey: CIOs Have Awoken to the Importance of AI, 3 January 2019. ID: G00375246
[iii] “Snap ML: A Hierarchical Framework for Machine Learning”, https://arxiv.org/abs/1803.06333