Deep learning involves incredibly compute-intensive tasks. GPUs can reduce training times by a significant factor and are thus an almost essential tool for serious deep learning practitioners. However, the GPUs themselves, as well as the surrounding hardware necessary to run them effectively, represent a significant investment.
In this article we will analyse whether to purchase this hardware directly or to rent it from a cloud computing vendor.
GPUs are specialised hardware architectures originally designed for real-time video game graphics, but they have since grown into a mature High Performance Computing (HPC) platform. This article is largely written for people who possess the time and capital to invest in deep learning R&D. However, it should also be of use to small institutional firms, or to individuals, considering adding a deep learning component to their offering. For those on more modest budgets it cannot be stressed enough that you should not invest a great deal of money into building a high-end deep learning setup right away. Since deep learning compute power has become such a commoditised resource, it is very straightforward to “try before you buy” using cloud vendors. Once you have more experience at training models, along with a solid quant trading research plan in mind, the hardware specification (and financial outlay) can be tailored to your specific requirements.
Now that the need for GPU hardware has been established, the next task is to determine whether to rent GPU-compute resources from “the cloud” or to purchase a local GPU desktop workstation.
The answer is heavily dependent upon the type of models being trained, the space of (hyper)parameters being searched, the cost of electricity in your locale, your current algorithmic trading data setup, and your personal preferences and research style.
Any pricing and performance information will likely become out of date—particularly in the rapidly moving field of deep learning.
For those interested in “trying before they buy”, our advice is to rent a GPU server from a cloud vendor first, in order to assess its computing performance on your own workloads.
Another major benefit of renting is that once the model is trained it can be exported and the GPU instance terminated. The model can then be executed elsewhere on much cheaper hardware (possibly locally), since inference requires far less compute than training. By contrast, if you purchase hardware, then once the deep learning research has finished you may be left with a high-powered deep learning machine with nothing to do!
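As a minimal, framework-agnostic sketch of this export-then-terminate workflow (the linear model and its weights here are hypothetical stand-ins; in practice you would use your framework's own serialisation, such as saving a PyTorch `state_dict`):

```python
import pickle

# Hypothetical "trained model": a simple linear model y = w*x + b whose
# weights would have been fitted on the rented GPU instance.
weights = {"w": 2.0, "b": 0.5}

# On the GPU instance: export the trained weights to disk, then terminate
# the instance so that billing stops.
with open("model.pkl", "wb") as f:
    pickle.dump(weights, f)

# Later, on much cheaper hardware (e.g. a local CPU machine): load the
# exported weights and run inference.
with open("model.pkl", "rb") as f:
    loaded = pickle.load(f)

def predict(x, params):
    """Inference is far less compute-hungry than training."""
    return params["w"] * x + params["b"]

print(predict(3.0, loaded))  # → 6.5
```

The expensive rented hardware is only paid for during the training phase; the cheap inference step carries no further rental cost.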
Let’s calculate the costs of owning and running your own GPU server versus renting one from a vendor such as iRender on a per-minute payment model. If you are looking to buy a 6x GTX 1080 server, it will cost you roughly $500 per card plus another $600 for the cheapest supporting peripherals, a total of $3,600. This figure does not include electricity or maintenance costs. Rental vendors, for their part, often locate their data centres in regions with some of the world’s cheapest electricity, many times less expensive than in highly populated regions; this advantage is sustainable in the long term and allows them to maintain fixed, competitive prices for dedicated GPU servers. Whether those rental rates beat outright ownership depends on how heavily you use the hardware.
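Using the figures above, a quick break-even calculation shows roughly how many hours of use the purchase price buys. The $0.50 per GPU-hour rental rate, the 1.5 kW power draw, and the $0.10/kWh electricity price below are illustrative assumptions, not vendor quotes:

```python
# Illustrative buy-vs-rent break-even sketch; rates are assumptions.
num_gpus = 6
cost_per_gpu = 500.0        # USD per GTX 1080 (figure from the text above)
peripherals = 600.0         # USD for the cheapest supporting hardware
purchase_cost = num_gpus * cost_per_gpu + peripherals   # 3600.0

power_kw = 1.5              # assumed draw of the full server under load
electricity_rate = 0.10     # USD per kWh, illustrative
rental_rate = 0.50          # USD per GPU-hour, illustrative

def own_cost(hours):
    """Owning: up-front purchase plus electricity for the hours used."""
    return purchase_cost + hours * power_kw * electricity_rate

def rent_cost(hours):
    """Renting: pay per GPU-hour for all six cards, nothing up front."""
    return hours * rental_rate * num_gpus

# Find the usage level at which buying becomes cheaper than renting.
hours = 0
while own_cost(hours) > rent_cost(hours):
    hours += 1
print(hours)  # → 1264 (roughly 53 days of continuous training)
```

Below the break-even point renting is cheaper; only sustained, near-continuous training workloads favour ownership, which is why the "try before you buy" approach is so attractive for early-stage research.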