Final build: Z800 workstation
The last couple of articles covered setting up a refurbished HP Z800 workstation for work on cryptocurrencies: adding RAID storage and installing a pair of overclocked EVGA GTX 980 video boards to support GPU mining. The resulting machine is probably more power-hungry than the ideal mining rig, but another way to look at it: the same box has 12 cores and 3 TB of storage, and once the electricity is paid for it is a better deal than renting the equivalent capacity on AWS.
I also got a basic Kill-A-Watt meter to measure power usage. With both cards up it fluctuates between 500W and 570W, averaging somewhere around 550W. With an old 850W power supply, that is probably near the outer limit of what it can manage. I would not suggest plugging in more power-hungry cards like a Titan Z or even a GTX 980 Ti without switching to a heftier power supply.
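That 550W figure translates directly into a daily electricity cost; here is a quick back-of-the-envelope calculation (the $0.12/kWh rate is an illustrative assumption, not a figure from my bill):

```shell
# Rough daily electricity cost for the rig: watts -> kWh/day * rate
# 550W draw and $0.12/kWh are assumptions; substitute your own numbers.
awk 'BEGIN { watts = 550; rate = 0.12; printf "%.2f USD/day\n", watts / 1000 * 24 * rate }'
# -> 1.58 USD/day
```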
That brings us to software.
Base operating system
The system is currently on a stock Ubuntu Server 14.04 LTS build. If I have time I may package up the whole thing as a Dockerfile, as it seems it's possible to get access to the GPU cards through Docker, and that would give everyone a nice containerized setup that could run locally or in GPU-capable AWS hosts.
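As a rough illustration of what that containerized setup might look like, here is a minimal Dockerfile sketch (untested; the PPA and package names mirror the apt steps later in this article, and GPU device pass-through flags would still be needed at run time):

```dockerfile
# Sketch only: mirrors the apt steps below; assumes NVidia device
# pass-through (e.g. --device=/dev/nvidia0) when the container is run.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y software-properties-common
RUN add-apt-repository -y ppa:ethereum/ethereum && \
    apt-get update && \
    apt-get install -y build-essential git cmake
# ... driver install and cpp-ethereum build steps would follow here
```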
First, make sure you have restricted and trusty-backports repositories available:
deb http://us.archive.ubuntu.com/ubuntu/ trusty main restricted
deb-src http://us.archive.ubuntu.com/ubuntu/ trusty main restricted
deb http://us.archive.ubuntu.com/ubuntu/ trusty-backports main restricted universe multiverse
deb-src http://us.archive.ubuntu.com/ubuntu/ trusty-backports main restricted universe multiverse
For newer CMake you need to add one more repository:
sudo apt-add-repository ppa:george-edison55/cmake-3.x
Install NVidia drivers
Download and install the latest LTS branch NVidia drivers; at the time of writing it's 352.41. Alternatively, you can use wget:
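For example (the URL below follows NVidia's usual download layout for the 352.41 release; double-check it against the driver download page before running):

```shell
# Fetch the 352.41 driver installer directly, then run it
wget http://us.download.nvidia.com/XFree86/Linux-x86_64/352.41/NVIDIA-Linux-x86_64-352.41.run
chmod +x NVIDIA-Linux-x86_64-352.41.run
sudo ./NVIDIA-Linux-x86_64-352.41.run
```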
Run the script, following the prompts to accept the license agreement. You may want to read the Ubuntu binary driver guide for NVidia.
Once done, make sure /usr/local/cuda/bin is in your PATH -- if the build cannot find nvcc, you missed this step.
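A minimal sketch of that PATH addition (assuming CUDA landed under the default /usr/local/cuda prefix); add the export line to ~/.bashrc to make it permanent:

```shell
# Put the CUDA binaries (nvcc among them) on PATH for this session
export PATH=/usr/local/cuda/bin:$PATH
# Confirm the directory is now searched
echo "$PATH" | grep -o '/usr/local/cuda/bin'
```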
Install Ethereum
For this the best thing to do is follow the latest cpp-ethereum docs, but here is the short version I followed. First install Ethereum itself:
sudo apt-get -y update
sudo apt-get -y install language-pack-en-base
sudo dpkg-reconfigure locales
sudo apt-get -y install software-properties-common
wget -O - http://llvm.org/apt/llvm-snapshot.gpg.key | sudo apt-key add -
sudo add-apt-repository "deb http://llvm.org/apt/trusty/ llvm-toolchain-trusty-3.7 main"
sudo add-apt-repository -y ppa:ethereum/ethereum-qt
sudo add-apt-repository -y ppa:ethereum/ethereum
sudo add-apt-repository -y ppa:ethereum/ethereum-dev
sudo apt-get -y update
sudo apt-get -y upgrade
and then install the developer packages:
sudo apt-get -y install build-essential git cmake libboost-all-dev libgmp-dev libleveldb-dev libminiupnpc-dev libreadline-dev libncurses5-dev libcurl4-openssl-dev libcryptopp-dev libjson-rpc-cpp-dev libmicrohttpd-dev libjsoncpp-dev libargtable2-dev llvm-3.7-dev libedit-dev mesa-common-dev ocl-icd-libopencl1 opencl-headers libgoogle-perftools-dev qtbase5-dev qt5-default qtdeclarative5-dev libqt5webkit5-dev libqt5webengine5-dev ocl-icd-dev libv8-dev libz-dev
Pull the latest Genoil code (the default branch is cudaminer-frontier) and build the cudaminer bundle. Note the COMPUTE flag here is optional: I have set it to 52 because the GTX 980 is a Compute 5.2 card; for a first-generation Maxwell card like the GTX 750 Ti it would be 50, since that card supports Compute 5.0.
git clone https://github.com/Genoil/cpp-ethereum
cd cpp-ethereum && mkdir build && cd build
cmake -DBUNDLE=cudaminer -DCOMPUTE=52 ..
make -j12
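If the build succeeded, the binary should be able to enumerate the cards before you commit to mining (--list-devices is a Genoil-fork option; check ./ethminer --help if your build does not recognize it):

```shell
# From the build directory: list the CUDA devices ethminer can see
./ethminer/ethminer -U --list-devices
```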
The final step once the build completes is to start up ethminer. Note I run it under nohup so it survives a terminal disconnect. The parameters you use are entirely based on the pool you use, or on whether you solo mine locally; the example below is for Suprnova, one of the bigger Ethereum mining pools. All you need to do is create an Ethereum account, set up a user and at least one worker instance, and then connect.
I had to play around with CUDA block size and grid size and the estimated difficulty parameter to get the optimal hashrate. The final tuned set of parameters I used was as follows, but your mileage may vary:
nohup ./ethminer -F http://eth-mine.suprnova.cc:3000/$USER.$INSTANCE/10 -G --cuda-extragpu-mem 0 --cuda-block-size 128 --cuda-grid-size 2048 --cuda-schedule auto
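Since the miner runs under nohup, its console output lands in nohup.out in the working directory; follow it with tail:

```shell
# Watch the miner log as it runs
tail -f nohup.out
```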
If it works you'll see something like this for a dual GTX 980 setup:
[OPENCL]:Found suitable OpenCL device [GeForce GTX 980] with 4289396736 bytes of GPU memory
14:47:56|gpuminer0 workLoop 1 #4db8c3d8… #4db8c3d8…
14:47:56|gpuminer1 workLoop 1 #4db8c3d8… #4db8c3d8…
miner 14:47:56|ethminer Mining on PoWhash #5e49ae02… : 40894464 H/s = 20447232 hashes / 0.5 s
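The reported rate is just hashes divided by the sampling interval, so the log line above works out to roughly 40.9 MH/s across both cards, or about 20 MH/s per GTX 980:

```shell
# 20447232 hashes in 0.5 s, per the log line above
awk 'BEGIN { printf "%.1f MH/s\n", 20447232 / 0.5 / 1e6 }'
# -> 40.9 MH/s
```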
Though it may seem tempting to CPU and GPU mine at the same time: don't. The increase in power consumption, noise, and heat to gain an extra 0.5 MH/s or less is just not worth it. You are better off adding GPU cards if you want to increase the hashrate.
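To keep an eye on that power-and-heat tradeoff while mining, nvidia-smi can poll draw and temperature per card (the query fields below are standard nvidia-smi options, but verify against nvidia-smi --help-query-gpu on your driver version):

```shell
# Log GPU index, temperature, and power draw every 5 seconds
nvidia-smi --query-gpu=index,temperature.gpu,power.draw --format=csv -l 5
```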