Contributing to eo-learn
First of all, thank you for contributing to eo-learn. Any effort in contributing to the library is very much appreciated.
Here is how you can contribute: report a bug, request a new feature, or contribute code via a pull request.
All contributors agree to follow our Code of Conduct.
eo-learn is distributed under the MIT license. When contributing code to the library, you agree to its terms and conditions. If you would like to keep parts of your contribution private, you can contact us to discuss the best solution.
We strive to provide high-quality working code, but bugs happen nevertheless.
When reporting a bug, please first check here whether it has already been reported. If not, open an issue with the `bug` label and provide the following information:

- how to reproduce the issue;
- OS and package versions.

This information helps us reproduce, pinpoint, and fix the reported issue.
If you are not sure whether the odd behaviour is a bug or a feature, it is best to open an issue and clarify.
Existing feature requests can be found here.
A new feature request can be created by opening a new issue with the `enhancement` label and describing how the feature would benefit the eo-learn community. Providing an example use-case helps us assess the scope of the feature request.
The GitHub Pull Request (PR) mechanism is the best option for contributing code to the library. Users can fork the repository, make their contribution in their local fork, and create a PR to add those changes to the codebase. GitHub provides excellent tutorials on how the fork-and-pull mechanism works and on how best to create a PR.
Existing PRs can be found here. Before creating a new PR, you should check whether someone else has already contributed a similar feature; if so, you can add your input to the existing code review.
The following guidelines should be observed when creating a PR.
Where applicable, create your contribution in a new branch of your fork based on the `develop` branch, as the `master` branch is aligned with the released package on PyPI. Upon completion of the code review, the branch will be merged into `develop` and, at the next package release, into `master`.
Document your PR to help maintainers understand and review your contribution. The PR should include:
- a description of the contribution;
- a link to the related issue or feature request.
Your contribution should include unit tests, both to verify the correct behaviour of the new feature and to lower the maintenance effort. Bug fixes as well as new features should include unit tests. When submitting the PR, check whether the Travis CI testing returns any errors, and if it does, please try to fix the issues causing the failure. A test `EOPatch` with data for each `FeatureType` is made available here. Unit tests evaluating the correctness of new tasks should use the data available in this `EOPatch`. New fields useful for testing purposes can be added, but should be consistent with the existing `EOPatch`. To execute all unit tests locally on your machine, run `pytest` from the main folder. See also examples/README.md for how to set up your Sentinel Hub account and local config for testing.
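To illustrate what such a unit test typically looks like, here is a self-contained sketch in the pytest style; `double_band_values` and the array shapes are hypothetical stand-ins for a real task's `execute` logic and `EOPatch` data:

```python
import numpy as np

def double_band_values(data):
    # Hypothetical stand-in for an EOTask's execute logic.
    return data * 2

def test_double_band_values():
    # Shape (time, height, width, channels), as used by FeatureType.DATA arrays
    data = np.ones((2, 4, 4, 3), dtype=np.float32)
    result = double_band_values(data)
    assert result.shape == data.shape
    np.testing.assert_allclose(result, 2.0)

test_double_band_values()
```

In a real test, the input would be loaded from the shared test `EOPatch` instead of being constructed in place.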
Try to keep contributions small, as this speeds up the reviewing process. In the case of large contributions, e.g. a new complex `EOTask`, it is best to contact us first to review the scope of the contribution.
Keep API compatibility in mind, in particular when contributing a new `EOTask`. In general, all new tasks should adhere to the modularity of eo-learn. Check the section below for more information on how to contribute an `EOTask`.
New features or tasks should be appropriately documented using Sphinx-style docstrings. The code follows the PEP-8 style guidelines, and Pylint is used to check the coding standard. Therefore, please run `pylint */ *.py` from the main folder, which contains the `pylintrc` file, to make sure your contribution passes the lint checks.
Get the latest development version by creating a fork and cloning the repo:

```shell
git clone git@github.com:<username>/eo-learn.git
```
All eo-learn packages can be installed at once using `python install_all.py -e`. To install each package separately, run `pip install -e <package_folder>`. We strongly recommend initializing a virtual environment before installing the required packages, for example by using the Python Package Index (PyPI) and virtualenv:
```shell
cd eo-learn

# The following creates the virtual environment in the ".env" folder.
virtualenv --python python3 .env
source .env/bin/activate

# The following installs all eo-learn subpackages and development packages
# using PyPI in the activated virtualenv environment.
python install_all.py -e
pip install -r requirements-dev.txt -r requirements-docs.txt
```
Or alternatively by using Conda:
```shell
cd eo-learn

# The following creates the virtual environment named "dev_eolearn" where the
# Conda installation is located. ipykernel enables Jupyter (using
# nb_conda_kernels) to select the environment as a kernel. ipywidgets enables
# Jupyter to show the progress bar of EOExecutor without raising an error.
conda create -n dev_eolearn python=3.7 ipykernel ipywidgets
conda activate dev_eolearn

# The following installs all eo-learn subpackages and development packages
# using PyPI in the activated Conda environment.
python install_all.py -e
pip install -r requirements-dev.txt -r requirements-docs.txt
```
Note: to reduce later merge conflicts, always pull the latest version of the `develop` branch from the upstream eo-learn repository (located here) into your fork before starting work on your PR.
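Concretely, syncing your fork can be done by adding the upstream repository as a remote once and pulling `develop` from it before starting each new branch (a sketch; the remote name `upstream` is just a convention):

```shell
# One-time setup: add the main eo-learn repository as a remote
git remote add upstream https://github.com/sentinel-hub/eo-learn.git

# Before starting a new PR: update your local and forked develop branch
git checkout develop
git pull upstream develop
git push origin develop
```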
`EOTask`s allow applying eo-learn workflows to different use cases, adapting to imaging sources and processing chains. If you think a task is general enough to be useful to the community, we would be delighted to include it in the library.
`EOTask`s are currently grouped by scope, e.g. core, IO, masks, and so on. A list of implemented tasks can be found in the documentation. The following code snippet shows how to create your own `EOTask`:
```python
class FooTask(EOTask):
    def __init__(self, foo_params):
        self.foo_params = foo_params

    def execute(self, eopatch, *, runtime_params):
        # do what foo does on input eopatch and return it
        return eopatch
```
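For illustration, the calling convention can be sketched in plain Python, with a dictionary standing in for an `EOPatch` so the example runs without eo-learn installed; `FooTask`, `foo_params`, and `runtime_params` are the hypothetical names from the snippet above:

```python
class FooTask:
    # Stand-in for an EOTask subclass: parameters go to the constructor,
    # the actual work happens in execute().
    def __init__(self, foo_params):
        self.foo_params = foo_params

    def execute(self, eopatch, *, runtime_params=None):
        # A real task would transform the patch; here we just record the call.
        eopatch["processed_with"] = (self.foo_params, runtime_params)
        return eopatch

task = FooTask(foo_params={"scale": 2})
result = task.execute({}, runtime_params={"threads": 4})
print(result["processed_with"])  # ({'scale': 2}, {'threads': 4})
```

Task parameters that are fixed for a workflow belong in the constructor, while values that vary per execution are passed as keyword-only run-time arguments.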
When creating a new task, bear in mind the following:
Tasks should be as modular as possible, facilitating task re-use and sharing.
An `EOTask` should perform a well-defined operation on the input eopatch(es). If the operation could be split into atomic sub-operations that could be used separately, then consider splitting the task into multiple tasks. Similarly, if tasks share the bulk of the implementation but differ in a minority of it, consider using base classes and inheritance. The interpolation tasks are a good example of this.
Tasks should be as generalizable as possible; therefore, hard-coding of task parameters or `EOPatch` feature types should be avoided. Use the `EOTask._parse_features` method to parse input features in a task, and pass task parameters as arguments, either in the constructor or at run-time.
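To show what feature parsing buys you, the sketch below normalizes the two common ways users specify features into a flat list of `(feature_type, name)` pairs; the `FeatureType` enum and `parse_features` helper here are simplified stand-ins for the real eo-learn API, not its actual implementation:

```python
from enum import Enum

class FeatureType(Enum):
    DATA = "data"
    MASK = "mask"

def parse_features(features):
    # Accept a single (type, name) tuple or a {type: [names]} mapping
    # and normalize both into a flat list of (type, name) pairs.
    if isinstance(features, tuple):
        return [features]
    if isinstance(features, dict):
        return [(ftype, name) for ftype, names in features.items() for name in names]
    raise ValueError(f"Unsupported feature specification: {features!r}")

print(parse_features({FeatureType.MASK: ["CLM", "VALID_DATA"]}))
```

Accepting flexible specifications like this keeps tasks free of hard-coded feature names and types.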
If in doubt whether a task is general enough to be of interest to the community, or if you are not sure which sub-package to contribute your task to, send us an email or open a feature request.
We look forward to including your contributions in eo-learn!