There's been an exciting wave in technology that's bringing self-driving cars, augmented reality and artificial intelligence out of the research lab and into our world. Last month, technologists from a broad range of industries gathered in Silicon Valley to compare notes on how one technology - the Graphics Processing Unit (GPU) - is accelerating advances in aerospace, life sciences, gaming, security, transportation and many other industries.
The GPU is the computing muscle that's enabling technology to crunch data at record speeds. I know there's been much talk of "big data" in supercomputing recently; the GPU's niche is bulky data - images, sounds and other sensory information that our brains are naturally built to handle, but that computer processors have historically struggled with. At this year's GPU Technology Conference (GTC), some key themes emerged where this computing power will soon have a big impact in the industrial space - if it hasn't already. Here are some of the developments to look out for:
Deep Learning
Being able to sense images and sounds in real time is just one part of human perception - there's also all of the information surrounding an object that gives it meaning and context. Deep Learning trains computers to recognize images or sounds through multiple, parallel layers of processing, then runs huge data sets through them to test and improve their recognition skills. Researchers have pushed the technology forward through developer competitions and crowd-sourced projects held over the past few years. It has developed to the point where Deep Learning-based machine vision has surpassed human image recognition in both speed and accuracy for several types of image data. Oddly enough, the GPU-driven "neural networks" that identify and classify image data are really good at working with natural objects: not only can a neural network spot a dog in a photo, it can identify the breed.
Deep Learning currently has the most difficulty identifying certain types of man-made objects - especially those that are thin, flat, reflective and/or straight-edged. But as the recognition methods improve, things we can already do with current sensing and perception technology in robotics will be done more quickly, more reliably and, eventually, more simply. Deep Learning could soon make for synaptically fast, reliable identification in any industry that makes or moves things - opening ways to automate that weren't viable before. And it's not just the part or product that could be identified, but all the data that comes with it: parametric data, SKU data, etc. In robotics, that visual recognition could trigger a stored task - instantly inspecting a part for defects, say, or analyzing the texture of a welding weave.
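For the curious, the training loop behind these recognition systems boils down to repeatedly showing a layered network examples and nudging its weights toward the right answers. Here's a minimal sketch of that idea in Python with NumPy - a toy, invented "image" task (bright top half vs. bright bottom half) and a single hidden layer, not a production vision system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image" data: noisy 8x8 patches whose top half (class 0) or
# bottom half (class 1) is brighter. Purely synthetic, for illustration.
def make_sample(bottom):
    img = rng.normal(0.0, 0.3, (8, 8))
    if bottom:
        img[4:, :] += 1.0
    else:
        img[:4, :] += 1.0
    return img.ravel()

X = np.array([make_sample(c) for c in [0] * 200 + [1] * 200])
y = np.array([0] * 200 + [1] * 200)

# One hidden layer with a logistic output - a minimal "neural network".
W1 = rng.normal(0, 0.1, (64, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, (16, 1));  b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(300):
    # Forward pass: hidden activations, then predicted probabilities.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2).ravel()
    # Backward pass: cross-entropy gradient, averaged over the batch.
    dz2 = ((p - y) / len(X))[:, None]
    dW2 = h.T @ dz2; db2 = dz2.sum(axis=0)
    dz1 = (dz2 @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ dz1; db1 = dz1.sum(axis=0)
    # Nudge every weight a little toward lower error.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

# Final forward pass to measure how well the network learned.
h = np.tanh(X @ W1 + b1)
p = sigmoid(h @ W2 + b2).ravel()
accuracy = ((p > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Real Deep Learning systems run this same pattern - forward pass, gradient, update - over millions of images and millions of weights, which is exactly the parallel arithmetic GPUs excel at.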
WebGL
Imagine walking into an empty spot on the manufacturing floor and taking a panoramic photo with your tablet. It then renders a virtualized version of the space you're standing in - right in your web browser. Now you drop in 3D models of your equipment for a new production cell, drag them and reorient them, all while maintaining scale. This is where WebGL is taking our device-driven technology. The "GL" in WebGL stands for graphics library. While a server at the other end of your network delivers the bulky model data, it's actually your device's own GPU, accessed through the web browser, that enables fast, realistic rendering of the models. What used to take hours to render can now be done within the attention span of an iPad user - that's quick!
Six months ago, I could barely spell GPU, so I must say I was pretty floored to be presenting at the GPU Technology Conference to the people who are developing these breakthroughs. My talk related a use case of this technology: a virtual trade show booth we recently created for the website. Even in the six months since we started the project, the GPU-accelerated, WebGL-enabled platform has advanced - it now handles even bulkier polygon data, renders 3D models faster and more realistically, and delivers to a greater share of devices.
Anyone who's had to convert and send a 3D CAD file can appreciate how this could improve the workflow of design reviews with customers. It could also deliver interactive online service manuals with 360-degree part orientation, engaging training modules and, overall, a more effective way to communicate visually - leveraging the 3D assets you're already working with.
High Performance Computing Networks
Speaking of bulky data, what about the day-to-day workflow in a production or product development environment? GPU-driven technology is enabling popular 3D CAD software to be offered from the cloud via High Performance Computing (HPC) networks. These networks can also run network-installed 3D CAD, simulators and all the other bulky software you license, delivering it to a virtual desktop environment in any device's web browser. Design engineers can say goodbye to the two-workstation setup, freeing them to work with big files and collaborate from any device, anywhere. HPC technology is sure to speed up time to market in product development and help IT departments better manage the day-to-day of your network - whether it's internal or cloud-hosted.
As manufacturing, packaging and logistics operations continue to modernize, it seems that we're all becoming technology companies in a way. The GPU - much like its data-delivery predecessors - is an example of enabling technology that can be implemented across a diverse range of industries. When you explore the possibilities and find a way to adapt it to your industry, it sets off even more advancements.