IBM is merging Google’s artificial intelligence tools with its own cognitive computing technologies, allowing deep-learning systems to more accurately answer complex questions or recognize images and voices. Google’s open-source TensorFlow machine-learning framework is being packaged into IBM’s PowerAI, a toolkit for machine learning, and the two can be combined to improve machine learning on IBM’s Power servers.

A computer learns as more data is fed into its system, much as a human does. PowerAI and TensorFlow can help track patterns, classify data, and produce approximate answers to queries, and those answers grow more accurate as the computer learns from more data.

Integrating TensorFlow into PowerAI will solve a big problem: installing Google’s machine-learning technologies on Power systems, said Sumit Gupta, vice president of high-performance computing and analytics at IBM.

IBM isn’t creating a fork of TensorFlow for PowerAI, though that could be on the roadmap, Gupta said. The company has already created a version of the open-source Caffe deep-learning framework for its PowerAI toolkit. Forking TensorFlow to work with specific hardware or applications is commonplace; specific versions of TensorFlow have been created for Nvidia GPUs, embedded devices, robots, and drones.

Integrating and optimizing TensorFlow for PowerAI also speeds up machine learning. The new PowerAI is designed for the IBM Power Systems S822LC server, which uses Nvidia’s speedy NVLink interconnect to connect to the GPUs, where most of the machine learning takes place. NVLink is faster than the PCI-Express 3.0 interface found on most servers today, and the S822LC is designed to work with Nvidia’s Tesla P100 GPU.
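To make the learn-from-more-data idea concrete, here is a minimal, self-contained sketch in plain Python. It is not the PowerAI or TensorFlow API; it is a toy nearest-centroid classifier, chosen only to illustrate how a model's answers settle toward the right classes as more labeled examples arrive. The function and label names are invented for the example.

```python
# Toy illustration (not PowerAI/TensorFlow): a nearest-centroid classifier.
# Each label's "knowledge" is just the mean of the features seen for it,
# so feeding in more samples refines the centroids and the answers.

def train(samples):
    """samples: list of (feature, label) pairs; returns label -> mean feature."""
    sums, counts = {}, {}
    for x, label in samples:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def classify(model, x):
    """Return the label whose learned centroid is closest to x."""
    return min(model, key=lambda label: abs(model[label] - x))

# With only one sample per class, the centroids are rough guesses...
small = train([(1.0, "low"), (9.0, "high")])

# ...with more data per class, they settle near the true class means,
# so the classifier's approximate answers become more reliable.
big = train([(1.0, "low"), (2.0, "low"), (3.0, "low"),
             (8.0, "high"), (9.0, "high"), (10.0, "high")])

print(classify(big, 2.5))   # → low
print(classify(big, 8.5))   # → high
```

Real deep-learning frameworks like TensorFlow follow the same loop at vastly larger scale, which is why offloading the training computation to GPUs over a fast interconnect matters.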