Improving sources of sustainable energy is a worldwide challenge with environmental and economic-security implications. Ying-Yi Hong, distinguished professor of Power Systems and Energy at Chung Yuan…| NVIDIA Technical Blog
The NVIDIA CUDA Deep Neural Network library (cuDNN) is a GPU-accelerated library of deep learning primitives with state-of-the-art performance. cuDNN is integrated with popular deep…| NVIDIA Technical Blog
I’ve been running a podcast for close to half a decade now, called The Work Item. Publishing new episodes generally takes a bit of time because of all the prep work that needs to happen beforehand. I now get to use AI to automate a pretty tedious part of the process.| den.dev
Builds end-to-end accelerated AI applications and supports edge AI development.| NVIDIA Developer
(The wheel has now been updated to the latest PyTorch 1.0 preview as of December 6, 2018.) You’ve just received a shiny new NVIDIA Turing (RTX 2070, 2080 or 2080 Ti), or maybe even a beautiful Tesla V100, and now you would like to try out mixed precision (well mostly fp16) training on those lovely tensor cores, using PyTorch on an Ubuntu 18.04 LTS x86_64 system. The idea is that these tensor cores chew through fp16 much faster than they do through fp32.| vxlabs.com
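The classic mixed-precision recipe alluded to above (fp16 model for the fast forward/backward pass, an fp32 "master" copy of the weights for the optimizer, and loss scaling to keep small fp16 gradients from underflowing) can be sketched as follows. This is an illustrative outline, not the post's actual code: the names `make_master_params`, `train_step`, and `LOSS_SCALE` are made up here, and the snippet omits the CUDA device placement you would use on a real Turing or V100 GPU.

```python
import torch
import torch.nn as nn

# Static loss scale: multiplying the loss before backward() shifts small
# gradients into fp16's representable range; we divide it back out in fp32.
LOSS_SCALE = 128.0

def make_master_params(model):
    """Keep an fp32 copy of each fp16 parameter for the optimizer to update."""
    return [p.detach().clone().float().requires_grad_(True)
            for p in model.parameters()]

model = nn.Linear(16, 4).half()                # weights stored in fp16
master_params = make_master_params(model)      # fp32 shadow copies
opt = torch.optim.SGD(master_params, lr=1e-2)  # optimizer steps in fp32

def train_step(x, y):
    loss = nn.functional.mse_loss(model(x.half()), y.half())
    (loss * LOSS_SCALE).backward()             # scaled fp16 backward pass
    for p, m in zip(model.parameters(), master_params):
        m.grad = p.grad.float() / LOSS_SCALE   # unscale into fp32 grads
        p.grad = None
    opt.step()
    opt.zero_grad()
    with torch.no_grad():                      # copy fp32 masters back to fp16
        for p, m in zip(model.parameters(), master_params):
            p.copy_(m)
    return loss.item()
```

On a GPU you would move `model` and the batches to `cuda` first; later PyTorch releases wrap this whole pattern up in `torch.cuda.amp.autocast` and `GradScaler`, which also adjust the loss scale dynamically.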
Google recently announced the availability of GPUs on Google Compute Engine instances. For my deep learning experiments, I often need beefier GPUs than the puny GTX 750Ti in my desktop workstation, so this was good news. To make the GCE offering even more attractive, their GPU instances are also available in their EU datacenters, which, in terms of latency, is a big plus for me here on the southern tip of the African continent.| vxlabs.com