I tried selection #2:
"There are 2 choices for the alternative python3 (providing /usr/bin/python3).
Selection Path Priority Status
- 0 /usr/bin/python3.10 2 auto mode
1 /usr/bin/python3.10 2 manual mode
2 /usr/bin/python3.8 1 manual mode"
But I got this error during the "install requirements" step:
ERROR: google-auth 2.19.1 has requirement urllib3<2.0, but you'll have urllib3 2.0.2 which is incompatible.
Installing collected packages: packaging, cycler, fonttools, kiwisolver, zipp, importlib-resources, six, python-dateutil, contourpy, pyparsing, matplotlib, opencv-python, PyYAML, charset-normalizer, certifi, urllib3, idna, requests, scipy, tqdm, importlib-metadata, markdown, pyasn1, pyasn1-modules, cachetools, rsa, google-auth, grpcio, absl-py, tensorboard-data-server, protobuf, oauthlib, requests-oauthlib, google-auth-oauthlib, MarkupSafe, werkzeug, tensorboard, pytz, tzdata, pandas, seaborn, thop
Successfully installed MarkupSafe-2.1.3 PyYAML-6.0 absl-py-1.4.0 cachetools-5.3.1 certifi-2023.5.7 charset-normalizer-3.1.0 contourpy-1.0.7 cycler-0.11.0 fonttools-4.39.4 google-auth-2.19.1 google-auth-oauthlib-1.0.0 grpcio-1.54.2 idna-3.4 importlib-metadata-6.6.0 importlib-resources-5.12.0 kiwisolver-1.4.4 markdown-3.4.3 matplotlib-3.7.1 oauthlib-3.2.2 opencv-python-4.7.0.72 packaging-23.1 pandas-2.0.2 protobuf-4.23.2 pyasn1-0.5.0 pyasn1-modules-0.3.0 pyparsing-3.0.9 python-dateutil-2.8.2 pytz-2023.3 requests-2.31.0 requests-oauthlib-1.3.1 rsa-4.9 scipy-1.10.1 seaborn-0.12.2 six-1.16.0 tensorboard-2.13.0 tensorboard-data-server-0.7.0 thop-0.1.1.post2209072238 tqdm-4.65.0 tzdata-2023.3 urllib3-2.0.2 werkzeug-2.3.4 zipp-3.15.0
WARNING: The following packages were previously imported in this runtime:
[cycler,dateutil,kiwisolver,matplotlib,mpl_toolkits,six]
You must restart the runtime in order to use newly installed versions.
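
(The urllib3 clash is just the resolver complaining that google-auth 2.19.1 wants urllib3<2.0 while the requirements pulled in 2.0.2. I haven't tested it, but I assume pinning urllib3 back to a 1.26.x release and then restarting the runtime, as the warning asks, would quiet that part:)

!pip install "urllib3<2"    # downgrade to the 1.26.x line that google-auth 2.19.1 accepts
# then Runtime -> Restart runtime before continuing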
Then, after the "!python setup.py develop" step, I got:
/content/yolov5_obb/utils/nms_rotated
running develop
running egg_info
writing nms_rotated.egg-info/PKG-INFO
writing dependency_links to nms_rotated.egg-info/dependency_links.txt
writing top-level names to nms_rotated.egg-info/top_level.txt
/usr/local/lib/python3.8/dist-packages/torch/utils/cpp_extension.py:381: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja… Falling back to using the slow distutils backend.
  warnings.warn(msg.format('we could not find ninja.'))
reading manifest file 'nms_rotated.egg-info/SOURCES.txt'
writing manifest file 'nms_rotated.egg-info/SOURCES.txt'
running build_ext
Traceback (most recent call last):
  File "setup.py", line 38, in <module>
    setup(
  File "/usr/lib/python3/dist-packages/setuptools/__init__.py", line 144, in setup
    return distutils.core.setup(**attrs)
  File "/usr/lib/python3.8/distutils/core.py", line 148, in setup
    dist.run_commands()
  File "/usr/lib/python3.8/distutils/dist.py", line 966, in run_commands
    self.run_command(cmd)
  File "/usr/lib/python3.8/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/usr/lib/python3/dist-packages/setuptools/command/develop.py", line 38, in run
    self.install_for_development()
  File "/usr/lib/python3/dist-packages/setuptools/command/develop.py", line 140, in install_for_development
    self.run_command('build_ext')
  File "/usr/lib/python3.8/distutils/cmd.py", line 313, in run_command
    self.distribution.run_command(command)
  File "/usr/lib/python3.8/distutils/dist.py", line 985, in run_command
    cmd_obj.run()
  File "/usr/lib/python3/dist-packages/setuptools/command/build_ext.py", line 87, in run
    _build_ext.run(self)
  File "/usr/lib/python3.8/distutils/command/build_ext.py", line 340, in run
    self.build_extensions()
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/cpp_extension.py", line 404, in build_extensions
    self._check_cuda_version()
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/cpp_extension.py", line 781, in _check_cuda_version
    raise RuntimeError(CUDA_MISMATCH_MESSAGE.format(cuda_str_version, torch.version.cuda))
RuntimeError:
The detected CUDA version (11.8) mismatches the version that was used to compile
PyTorch (11.3). Please make sure to use the same CUDA versions.
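
So the real blocker seems to be that the Colab image now ships CUDA 11.8 while the torch wheel in this runtime was built against 11.3. What I'm planning to try (untested, and assuming the repo works with a newer torch build) is to reinstall PyTorch from the cu118 wheel index before rebuilding the extension, roughly:

!nvcc --version                                                           # system CUDA toolkit (11.8 here)
!python -c "import torch; print(torch.__version__, torch.version.cuda)"   # CUDA version torch was compiled against
!pip install ninja                                                        # optional: silences the "could not find ninja" warning
!pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118   # wheels built for CUDA 11.8
!cd /content/yolov5_obb/utils/nms_rotated && python setup.py develop      # retry the extension build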
Were you able to get around the above problems?