# Adding GPU Support

The goal is a setup where the following check passes:

```python
import tensorflow as tf
assert tf.config.list_physical_devices('GPU')
```

Miniconda is the recommended approach for installing TensorFlow with GPU support. It creates a separate environment to avoid changing any installed software in your system. This is also the easiest way to install the required software, especially for the GPU setup.

First, install the NVIDIA GPU driver if you have not done so already. You can verify it is installed with the following command:

```shell
nvidia-smi
```

Then install CUDA and cuDNN with conda:

```shell
conda install -c conda-forge cudatoolkit=11.2 cudnn=8.1.0
```

Next, configure the system paths. You can do it with the following command every time you start a new terminal after activating your conda environment:

```shell
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/
```

For your convenience, it is recommended to automate this with the following commands, so that the system paths are configured automatically when you activate the conda environment:

```shell
mkdir -p $CONDA_PREFIX/etc/conda/activate.d
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/' > $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
```

Rather than running the install script, we can simply add the dependencies to environment.yml. Note that you can also use a more recent version of CUDA, provided your GPU is compatible with it, so I used the more recent 11.7 instead.

Setting the library path is a bit more complex. As suggested, we could use activate.d/env_vars.sh, but it would be better to declare it in our environment.yml. Instead of using the conda activate scripts, we can set environment variables with the `variables:` key. This has the added benefit that any changed variables are reset when the environment is deactivated. However, this only lets us set an environment variable, whereas we want to append to one. We can hack around this using the fact that, in the conda implementation, conda uses the shell to call `export`.

Some more digging shows the libdevice driver is related to XLA, an optimizing compiler that is apparently used automatically by Keras. We'll need to install some additional libraries associated with it. The libraries we need are then already installed, but not where TensorFlow looks for them. We can tell TensorFlow where to look by setting the XLA_FLAGS:

```yaml
name: tensorflow
channels:
  - defaults
  - nvidia/label/cuda-11.7.1
dependencies:
  - python=3.9
  - cudatoolkit=11.7
  - cudnn=8.1.0
  - cuda-nvcc
  - pip
  - pip:
      - tensorflow==2.11.0
variables:
  LD_LIBRARY_PATH: "'$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/'"
  XLA_FLAGS: "'--xla_gpu_cuda_data_dir=$CONDA_PREFIX/lib/'"
```
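The nested quoting in `variables:` can be illustrated without conda. This is a minimal sketch in plain sh, assuming (as the post implies) that conda's activation single-quotes the value when it calls `export`; the `/opt/conda/envs/tensorflow` prefix is made up for illustration only:

```shell
# Hypothetical prefix, standing in for a real conda environment path.
CONDA_PREFIX=/opt/conda/envs/tensorflow

# If the activation script single-quotes the value, a plain reference
# stays literal and is never expanded:
export DEMO='$CONDA_PREFIX/lib/'
echo "$DEMO"    # prints: $CONDA_PREFIX/lib/

# Wrapping the value in its own single quotes, as in the environment.yml
# above, closes that quoting, so the shell expands the reference:
export DEMO=''$CONDA_PREFIX'/lib/'
echo "$DEMO"    # prints: /opt/conda/envs/tensorflow/lib/
```

The same mechanism is what lets `$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/` append to the existing path at activation time rather than setting a literal string.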
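With everything declared in the file, setup reduces to creating and activating the environment. A quick usage sketch, assuming the file is saved as environment.yml (these commands need a working conda install and a GPU, so run them on your own machine):

```shell
# Create the environment from the file and switch into it.
conda env create -f environment.yml
conda activate tensorflow

# Run the GPU check; it should exit silently on success.
python -c "import tensorflow as tf; assert tf.config.list_physical_devices('GPU')"
```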