GPU selection
Problem:
Currently, when trying to run MLP alongside another task that is already using the GPU, a "CUDA out of memory" error is raised, even though the second GPU is idle.
Research:
According to the Stanza maintainers, GPU selection can be controlled with the 'CUDA_VISIBLE_DEVICES' environment variable: https://github.com/stanfordnlp/stanza/issues/390
Solution: We can make use of both GPUs by pinning one worker to gpu-0 and the other to gpu-1 via CUDA_VISIBLE_DEVICES. This will at least give us the ability to run more tasks across the GPUs.
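A minimal sketch of the pinning approach: the variable must be set before any CUDA-using library (e.g. torch/stanza) initializes. The worker-ID-to-GPU mapping below is an assumption for illustration.

```python
import os

# Hypothetical worker ID: 0 for the first worker, 1 for the second.
worker_id = 1

# Must happen before importing torch/stanza, otherwise CUDA may already
# have enumerated all devices and the restriction is ignored.
os.environ["CUDA_VISIBLE_DEVICES"] = str(worker_id)

# Inside this process, the selected physical GPU now appears as device 0,
# so any code that targets cuda:0 runs on the chosen card.
```

Alternatively, the variable can be set when launching each worker from the shell (e.g. `CUDA_VISIBLE_DEVICES=1 python worker.py`), which avoids touching the worker code at all.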