MyCaffe 0.10.1.169-beta1

A complete C# re-write of Berkeley's open-source Convolutional Architecture for Fast Feature Embedding (CAFFE) for Windows C# developers, with full On-line Help, now featuring NoisyNet, Deep Q-Network and Policy Gradient Reinforcement Learning, cuDNN LSTM Recurrent Learning, and Neural Style Transfer support!

This is a prerelease version of MyCaffe. Install it with any of the following:

  Package Manager:   Install-Package MyCaffe -Version 0.10.1.169-beta1
  .NET CLI:          dotnet add package MyCaffe --version 0.10.1.169-beta1
  PackageReference:  <PackageReference Include="MyCaffe" Version="0.10.1.169-beta1" />
                     (for projects that support PackageReference, copy this XML node into the project file)
  Paket CLI:         paket add MyCaffe --version 0.10.1.169-beta1

CUDA 10.1.168, cuDNN 7.6.1, nvapi 410, Native Caffe up to 10/24/2018, Windows 10-1903, Driver 430.86

MyCaffe[1], a complete C# re-write of CAFFE[2], now supports Deep Q-Learning[3][4] with a NoisyNet[5] and a Prioritized Replay Buffer[6], all provided by the new DQN trainer running on the newly released cuDNN 7.6.1. This release also adds dual RNN/RL training, a multi-pass scheme in which the first pass trains the RNN portion of the model and the second pass performs RL training that uses the already-trained RNN side of the model.
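
The sketch below shows, in broad strokes, how one might spin up MyCaffe from C# and select the new DQN trainer. The property-string keys ("TrainerType", "UseNoisyNet", "UsePrioritizedReplay") and the exact trainer hand-off are assumptions for illustration only; please check the MyCaffe On-line Help for the names and signatures actually exposed by this release.

  // Minimal sketch only: the property keys below are assumptions for
  // illustration; consult the MyCaffe On-line Help for the actual trainer API.
  using System;
  using MyCaffe;                // MyCaffeControl
  using MyCaffe.basecode;       // SettingsCaffe, Log, CancelEvent

  class DqnSketch
  {
      static void Main()
      {
          Log log = new Log("DQN sketch");
          CancelEvent evtCancel = new CancelEvent();
          SettingsCaffe settings = new SettingsCaffe();

          // Create the MyCaffe instance that owns the underlying CUDA/cuDNN resources.
          MyCaffeControl<float> mycaffe = new MyCaffeControl<float>(settings, log, evtCancel);

          // Hypothetical property string selecting the DQN trainer with a
          // NoisyNet and a prioritized replay buffer (key names are assumptions).
          string strProperties = "TrainerType=DQN;UseNoisyNet=True;UsePrioritizedReplay=True";
          Console.WriteLine("Trainer properties: " + strProperties);

          // ... load the model and solver, then hand 'mycaffe' and 'strProperties'
          //     to the DQN trainer as described in the On-line Help ...

          mycaffe.Dispose();
      }
  }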

IMPORTANT NOTE: When using TCC mode, we recommend placing ALL headless GPUs in TCC mode, because we have experienced stability issues when running headless GPUs with a mix of TCC and WDDM modes.

REQUIRED SOFTWARE:
1.) Install NVIDIA CUDA 10.1.168, which you can download from https://developer.nvidia.com/cuda-downloads
2.) Install NVIDIA cuDNN 7.6.1, which you can download from https://developer.nvidia.com/cudnn
3.) Download and install Microsoft SQL Express 2016 (or later).

This release of the MyCaffe AI Platform and Test Applications has the following new additions:

  • CUDA 10.1.168/cuDNN 7.6.1 supported (with driver 430.86).
  • Windows 1903, OS Build 18362.207 now supported.
  • New Deep Q-Learning DQN trainer.
  • New NoisyNet support added to Inner-Product Layers.
  • Added FrameSkip property to Atari Gym.
  • Added Rally End and Negative Reward support to Atari Gym.
  • Added ATARI 'breakout' ROM to MyCaffe Test Application.
  • Added new MyCaffe.gym.python.dll for easy gym integration with Python.
  • Added new CudaDnn.sqrt_scale function.
  • Added new CudaDnn.ger function (see the reference sketch after this list).
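
As a rough guide to the new CudaDnn.ger addition, the snippet below gives a CPU reference for the rank-1 update that a BLAS-style 'ger' routine traditionally computes (A = alpha * x * y^T + A). This is only a sketch of the math; the actual CudaDnn.ger and CudaDnn.sqrt_scale signatures in MyCaffe may differ and should be taken from the On-line Help.

  // CPU reference for a BLAS-style rank-1 update (what 'ger' traditionally
  // computes).  The CudaDnn.ger signature itself is not shown here and may differ.
  using System;

  class GerSketch
  {
      // A[i,j] += alpha * x[i] * y[j]  (outer-product accumulation)
      static void Ger(float alpha, float[] x, float[] y, float[,] A)
      {
          for (int i = 0; i < x.Length; i++)
              for (int j = 0; j < y.Length; j++)
                  A[i, j] += alpha * x[i] * y[j];
      }

      static void Main()
      {
          float[] x = { 1f, 2f };
          float[] y = { 3f, 4f, 5f };
          float[,] A = new float[2, 3];

          Ger(0.5f, x, y, A);          // A now holds 0.5 * x * y^T
          Console.WriteLine(A[1, 2]);  // 0.5 * 2 * 5 = 5
      }
  }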

The following bug fixes are in this release:

  • Fixed bugs in MemoryLoss backward related to EnableLoss=True.

Easily run Neural Style Transfer, train Deep Q-Learning[3][4] or Policy Gradient[1] models to beat Pong or Cart-Pole, or create the CIFAR-10 and MNIST datasets using the MyCaffe Test Application, which you can download from the MyCaffe GitHub site.

Create and train Deep Q-Learning[3][4], Policy Gradient[1], Neural Style Transfer, Recurrent Learning, Auto-Encoder, DANN and ResNet models by following the step-by-step instructions in the SignalPop Tutorials. And, to see other cool examples of what MyCaffe can do, see the SignalPop Examples.

If you would like to visually design, develop, test and debug your models, see the SignalPop AI Designer, which is specifically designed to enhance your MyCaffe deep learning.

Also, check out the SignalPop Universal Miner, which not only keeps your GPUs cool as you train, but also gives you detailed information on each of your GPUs (such as temperature, fan speed, overclock, and usage) and allows you to easily mine Ethereum. When not training AI, put those GPUs to use making some Ether - never let a good GPU go to waste!

Happy ‘deep’ learning!

[1] MyCaffe: A Complete C# Re-Write of Caffe with Reinforcement Learning by D. Brown, 2018.

[2] Caffe: Convolutional Architecture for Fast Feature Embedding by Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, Trevor Darrell, 2014, arXiv:1408.5093

[3] GitHub: google/dopamine, licensed under the Apache 2.0 License.

[4] Dopamine: A Research Framework for Deep Reinforcement Learning by Pablo Samuel Castro, Subhodeep Moitra, Carles Gelada, Saurabh Kumar, Marc G. Bellemare, 2018, arXiv:1812.06110

[5] Noisy Networks for Exploration by Meire Fortunato, Mohammad Gheshlaghi Azar, Bilal Piot, Jacob Menick, Ian Osband, Alex Graves, Vlad Mnih, Remi Munos, Demis Hassabis, Olivier Pietquin, Charles Blundell, Shane Legg, 2018, arXiv:1706.10295

[6] Prioritized Experience Replay by Tom Schaul, John Quan, Ioannis Antonoglou, David Silver, 2016, arXiv:1511.05952

Release Notes

MyCaffe AI Platform

Version History

Version            Downloads   Last updated
0.10.1.169-beta1          69   7/8/2019
0.10.1.145-beta1          85   5/31/2019
0.10.1.48-beta1           97   4/18/2019
0.10.1.21-beta1           92   3/5/2019
0.10.0.190-beta1         169   1/15/2019
0.10.0.140-beta1         110   11/29/2018
0.10.0.122-beta1         134   11/15/2018
0.10.0.75-beta1          130   10/7/2018