Deep learning is a powerful tool for image analysis. However, several obstacles stand in the way of its full democratization and its extension to remote sensing. Most notably, training deep learning models requires large amounts of labeled data and substantial computational power. In many cases, labeled data is hard to acquire, and machines with high computational power are expensive.
However, new foundation models trained with self-supervised methods (such as DINO, DINOv2, MAE, or SAM) aim to be as general as possible and produce high-quality features even before being trained on a specific downstream task.
With this plugin, we aim to provide an easy-to-use framework for applying these models in an unsupervised way to raster images. The features they produce can often already handle a large part of the analysis work using more conventional and lighter techniques than full deep learning, as sketched below. Therefore, one of our goals is that this plugin can be used without any GPU.
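As a rough illustration of this lighter workflow, here is a minimal sketch (not part of the plugin) that clusters an already-extracted feature raster with scikit-learn. The array shape and the loading step are assumptions; any raster I/O library (e.g. rasterio or GDAL) could be used to read the plugin's output into NumPy.

```python
# Minimal sketch: clustering extracted features with scikit-learn instead of
# training a full deep learning model. Assumes the feature raster has already
# been loaded into a NumPy array of shape (height, width, n_features).
import numpy as np
from sklearn.cluster import KMeans

def cluster_features(features, n_clusters=8):
    """Group pixels into n_clusters classes based on their deep features."""
    h, w, c = features.shape
    flat = features.reshape(-1, c)                       # one row per pixel
    labels = KMeans(n_clusters=n_clusters).fit_predict(flat)
    return labels.reshape(h, w)                          # back to a 2D label map

# Example with random data standing in for real features:
dummy = np.random.rand(64, 64, 768).astype(np.float32)
label_map = cluster_features(dummy, n_clusters=5)
print(label_map.shape)  # (64, 64)
```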
As of now, the plugin is not yet published in the official QGIS plugin repository, so you have to clone or copy this code into the Python plugin directory of QGIS and install it manually.
This is where it is most likely located:
# Windows
%APPDATA%\QGIS\QGIS3\profiles\default\python\plugins
# Mac
~/Library/Application\ Support/QGIS/QGIS3/profiles/default/python/plugins
# Linux
~/.local/share/QGIS/QGIS3/profiles/default/python/plugins
Otherwise (for instance if you have several profiles), you can locate it via Settings > User Profiles > Open active profile folder.
At first usage, a pop-up should appear if the necessary dependencies are not detected, giving you the option to install them automatically via pip.
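For illustration only, the kind of check this popup performs could look like the sketch below; the package list and the exact install call are assumptions, not the plugin's actual code.

```python
# Hypothetical sketch: detect missing packages and install them with pip
# from within the Python environment QGIS is running on.
import importlib.util
import subprocess
import sys

REQUIRED = ["torch", "timm"]  # hypothetical list; see the plugin's requirements for the real one

missing = [pkg for pkg in REQUIRED if importlib.util.find_spec(pkg) is None]
if missing:
    # On some QGIS installs, sys.executable may point to the QGIS binary rather
    # than the bundled Python interpreter; adjust accordingly.
    subprocess.check_call([sys.executable, "-m", "pip", "install", *missing])
```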
You can find more detailed instructions in the documentation.
For now, if you want to use a GPU, you should install torch manually by following the instructions at https://pytorch.org/get-started/locally/
Automated installation of GPU dependencies is in the works; you can try the gpu-support branch of this repo.
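Once torch is installed, a quick way to check that the GPU build is actually picked up is to run something like the following in the QGIS Python console (the version suffix shown in the comment is only an example):

```python
# Verify that the manually installed torch build can see the GPU.
import torch

print(torch.__version__)          # a CUDA build typically shows a suffix such as "+cu121"
print(torch.cuda.is_available())  # True if a GPU is visible to torch
```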
Documentation is available here.
Our models are created using the timm library, which is widely used in deep learning research. Here is the documentation explaining how it handles non-RGB images when loading pre-trained models.
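As an example of what this looks like in practice, the snippet below loads a pretrained ViT through timm with a non-standard number of input bands; the model name and band count are arbitrary choices for illustration.

```python
import timm
import torch

# in_chans adapts the patch embedding layer to a different number of input bands,
# reusing the pretrained RGB weights as described in the timm documentation.
model = timm.create_model("vit_base_patch16_224", pretrained=True, in_chans=6)

x = torch.randn(1, 6, 224, 224)        # a fake 6-band tile
features = model.forward_features(x)   # spatially explicit token features
print(features.shape)
```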
You can create an overlap by selecting a stride smaller than the sampling size of your raster. In the advanced options, you can change how the tiles will be merged afterwards.
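To make the stride/overlap relationship concrete, here is a hypothetical sketch of how tile origins could be computed along one axis; the plugin's own tiling code may differ.

```python
# Hypothetical sketch: a stride smaller than the sampling size produces
# overlapping tiles along each axis of the raster.
def tile_origins(raster_size, sampling_size, stride):
    """Return the top-left coordinate of each tile along one axis."""
    origins = list(range(0, max(raster_size - sampling_size, 0) + 1, stride))
    # Make sure the last tile reaches the raster edge.
    if origins and origins[-1] + sampling_size < raster_size:
        origins.append(raster_size - sampling_size)
    return origins

# Sampling size 224 with stride 112: consecutive tiles overlap by half their width.
print(tile_origins(raster_size=1000, sampling_size=224, stride=112))
```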
This plugin was developed with ViTs in mind as template models. These produce spatially explicit features and divide the image into patches of typically 16x16 or 14x14 pixels. A smaller sampling size gives you better resolution, but less context for the model to work with.
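As a back-of-the-envelope illustration (assuming tiles are resized to the model's native input size before inference, which is a common ViT setup rather than a statement about this plugin's internals), the ground footprint of each feature vector scales with the sampling size:

```python
# How many raster pixels each ViT feature vector covers, under the assumption
# that every tile is resized to the model's native input size.
model_input = 224                             # pixels the ViT expects per side
patch_size = 16                               # ViT patch size
patches_per_side = model_input // patch_size  # 14 feature vectors per side
for sampling_size in (512, 224, 112):         # tile size taken from the raster, in pixels
    print(sampling_size, "->", sampling_size / patches_per_side, "raster pixels per feature")
```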
Feel free to file an issue on GitHub or submit a PR. A more detailed environment setup guide is to come.
The feature extraction algorithm was inspired by the Geo-SAM plugin. The dependency installation popup was adapted from code in the Deepness plugin.