This part of the documentation is meant for Exegol contributors, those who write code and open pull requests. It complements the users documentation.

First things first: once you know which module you want to contribute to (wrapper, images, documentation, resources, etc.), fork it, check out the dev branch, then come back to this page to start coding.


A new feature, whether it’s on the wrapper, images, or any other module, must be documented accordingly. Make sure to open a pull request against the appropriate Exegol-docs branch on top of your wrapper/images/whatever pull request.

Exegol-docs branches

  • nothing gets pushed to this branch directly; it is made to merge with the other branches

  • related to the wrapper (Exegol)

  • related to the images (Exegol-images)

  • general purpose

Before pushing a pull request on the documentation repository, it is advised to compile the documentation locally to make sure there are no errors and everything renders as expected. First, the requirements listed in requirements.txt must be installed (e.g. pip install --user -r ./requirements.txt). Then, the one-liner below can be used to remove any previous build, compile again, and open the result in a browser.

rm -r build; make html; open "build/html/community/contributors.html"

Nota bene: in the example above, the open command opens a web browser (it’s a macOS command), but it can be replaced by anything else that fits the contributor’s environment (e.g. firefox).


The Docker images are the heart of the Exegol project. A neat choice of tools, configurations, aliases, history commands, and various customizations are prepared in multiple images adapted for multiple uses: web hacking, Active Directory, OSINT (Open Source INTelligence), etc.

If you want to contribute to this part of the project, there are some things you need to know and some rules you need to follow.

Adding a new tool

In order to add a new tool to an image, here is how it goes. First, you need to figure out which package your tool’s installation function should go in.

Function structure

When preparing the install function for the package, don’t forget to include the following helper calls:

  • colorecho "Installing yourtool": this is needed to emit logs inside the CI/CD pipeline

  • catch_and_retry <some command>: this one is optional. When a command uses the Internet and could potentially fail randomly, the catch_and_retry wrapper is here to retry that command multiple times with increasing time intervals in order to avoid having a whole build fail because of one temporary network error. Nota bene: most standard Internet-involved commands are transparently put behind a catch_and_retry (e.g. git, wget, curl, go, etc.).

  • add-aliases yourtool: if your tool needs to have one or multiple aliases to work properly. You will need to create the aliases file in /sources/assets/shells/aliases.d/ named after your tool. This file must contain the alias(es) to set as follows.

    alias yourtool='python3 /opt/tools/yourtool/'
  • add-history yourtool: if it’s relevant to give some command examples of your tool. No need to populate the history with a command that’s very short or never used. Using long arguments is preferred, as is using environment variables (e.g. $USER, $PASSWORD, $TARGET, etc.). You will need to create the history file in /sources/assets/shells/history.d/ named after your tool. This file must contain the history command(s) like the example below.

    yourtool --user "$USER" --password "$PASSWORD" --target "$TARGET" --mode enum
    yourtool --user "$USER" --target "$TARGET" --mode unauthenticated
  • add-test-command "testcommand": this is needed by the CI/CD pipeline to conduct unit tests for all tools, making sure they are installed properly before publishing new images. The test command needs to return 0 if the tool works properly, anything else if it doesn’t. For instance, something like yourtool --help usually works, but not always! In order to find what command can be used for unit tests, you can run something like yourtool --help; echo $? to see what code is returned after the command is executed. One trick that can be used when the --help command returns something !=0 is to pipe it into a grep, like yourtool --help |& grep 'Usage:'.

  • add-to-list "yourtool,,description": this is used by the CI/CD pipeline to automatically export tools in the Tools list. The format of the entry is standard 3-columns CSV (comma separated values). The first column is the tool name, then the link to the tool, then the description. Be careful to not have more than 2 commas and replace any comma in the description by something else.
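For intuition, the retry behaviour mentioned above can be sketched as a small wrapper like the one below. This is a simplified illustration, not Exegol’s actual catch_and_retry implementation; the retry and flaky names are made up for the example.

```shell
#!/bin/bash
# Simplified sketch of a retry wrapper (illustrative only, not Exegol's real helper).
# Runs the given command, retrying up to 3 times with an increasing delay.
retry() {
    local max_tries=3 delay=1 try
    for ((try = 1; try <= max_tries; try++)); do
        "$@" && return 0
        if ((try < max_tries)); then
            echo "Attempt $try failed, retrying in ${delay}s..." >&2
            sleep "$delay"
            delay=$((delay * 2))
        fi
    done
    echo "Command failed after $max_tries attempts: $*" >&2
    return 1
}

# usage: a command that fails twice, then succeeds on the third attempt
attempts_file=$(mktemp)
echo 0 > "$attempts_file"
flaky() {
    local count=$(( $(cat "$attempts_file") + 1 ))
    echo "$count" > "$attempts_file"
    [ "$count" -ge 3 ]
}
retry flaky && echo "flaky eventually succeeded"
```

The increasing delay gives transient network errors (DNS hiccups, rate limits) time to clear before the next attempt.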

In case your tool doesn’t need aliases or history commands, add the following comment at the beginning of the tool install function: # CODE-CHECK-WHITELIST=, followed by a comma-separated list of the exclusions. Below is an example.

# CODE-CHECK-WHITELIST=add-aliases,add-history

TL;DR, your tool installation function should look something like this:

function install_yourtool() {
    colorecho "Installing yourtool"
    # tool install commands [...]
    add-aliases yourtool
    add-history yourtool
    add-test-command "yourtool --help"
    add-to-list "yourtool,,description"
}

Install standards

When installing a tool, depending on how it gets installed, here are the rules.

  • Most tools have their own virtual environment, in order to avoid dependency conflicts. Python virtual environments must have access to the system site-packages, to avoid redundancy with already-installed common dependencies.

  • Most tools are installed either in their own directory in /opt/tools/ or have the binary (or a symlink) in /opt/tools/bin/.

  • Disk space being limited, we don’t pull every source tree in full. When possible, add the --depth 1 option to your usual git clone command.

The easiest way to install a Python tool is to use pipx.

# from a remote repository
python3 -m pipx install git+

# from local sources
git -C /opt/tools/ clone --depth 1
python3 -m pipx install --system-site-packages /opt/tools/yourtool/

But some tools cannot be installed this way, either because they’re missing the required packaging files or for any other obscure reason. In that case, opt for the “Python (venv)” solution.
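For illustration, the venv approach can be sketched as below. The tool directory is a placeholder (in the images it would live under /opt/tools/), and a real install would also pull the tool’s own dependencies into the venv.

```shell
# Sketch of the "Python (venv)" approach for a hypothetical tool.
# TOOL_DIR is a placeholder; in Exegol images it would be /opt/tools/yourtool.
TOOL_DIR="/tmp/demo-yourtool"
mkdir -p "$TOOL_DIR"

# create the venv with access to the system site-packages,
# per the install standards above
python3 -m venv --system-site-packages "$TOOL_DIR/venv"

# the tool would then be run through its venv interpreter, typically via an alias:
# alias yourtool='/opt/tools/yourtool/venv/bin/python /opt/tools/yourtool/yourtool.py'
"$TOOL_DIR/venv/bin/python" --version
```

The --system-site-packages flag is what lets the venv reuse common dependencies already installed system-wide instead of duplicating them.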

Other standards

If your tool opens ports, or if there are credentials at play, please take a look at the corresponding documentation.

Multi-architecture builds

Know that Exegol images are built by, and for, AMD64 and ARM64 systems. Most systems are AMD64 (x86_64), but some other people use ARM64 (M1/M2 Apple Silicon chips, 64-bit Raspberry Pis, …). Whenever possible, try to make sure your tool install function works for both architectures. Rest assured, if you don’t have both architectures at your disposal it’s perfectly fine, we’ll take care of this part for you. If you do, and if your tool installation function includes commands that differ depending on whether they run on an ARM64 or AMD64 host, you can use the following structure.

if [[ $(uname -m) = 'x86_64' ]]
then
    # command for AMD64
elif [[ $(uname -m) = 'aarch64' ]]
then
    # command for ARM64
else
    criticalecho-noexit "This installation function doesn't support architecture $(uname -m)" && return
fi

Calling the install function

Once the install function is written, it needs to be called in the function that holds the same name as the package. For instance, if you’re adding your tool install function to the AD package, you’ll need to call it in the package_ad() function (usually at the bottom of that file).

It will look something like this.

function package_web() {
    # [...] other install functions
    install_yourtool
}

Submitting the pull request


Once all your changes are done, and before submitting a pull request, it is advised to test your installation process locally. The Exegol wrapper can be used to build local images. Run exegol install --help to see some examples. You can also run the unit tests yourself by creating and starting a container from the freshly built image and running the commands below.

# build the local image
exegol install "testimage" "full" --build-log "/tmp/testimage.log"

# create and start a container for the tests
exegol start "testcontainer" "testimage"

# run the tests (from the container)
cat /.exegol/build_pipeline_tests/all_commands.txt | grep -vE "^\s*$" | sort -u > /.exegol/build_pipeline_tests/all_commands.sorted.txt
python3 /.exegol/build_pipeline_tests/
cat /.exegol/build_pipeline_tests/failed_commands.log


Your pull request needs to be made against the dev branch.

Once you submit your pull request, and once the various changes that may be requested are made, a CI/CD pipeline will run to make sure your code is compliant and that the tool is installed and works as intended. The pipeline may raise some issues, but if they’re not related to your tool (e.g. network issues are common) don’t worry about it. If the errors are due to your tool install, then you’ll need to make the necessary changes to make your install work.

Once everything works, the pull request will be merged, the pipeline will run again in order to test, build and publish a new nightly image. Congrats, you’re now an Exegol contributor!

Temporarily fixing a tool

Tools sometimes have their own issues along their development. A temporary fix can be added as follows, in order to let builds pass successfully while the tool itself has not been fixed upstream. The fix depends on the way the tool is supposed to be installed.

For a tool installed through git, applying a temporary fix by checking out a previous (working) commit goes as follows:

  1. Find the commit ID that made the tool install fail. This can be found in a try-and-repeat manner: install the tool in an Exegol container, check out a commit ID, try installing again, and repeat until it works.

  2. Comment out the initial git clone command.

  3. Add the temporary fix (git clone and git checkout) in an if statement that makes sure the fix won’t stay there forever. Once the expiry date is reached, the error message will be raised and noticed in the pipeline.

  4. (bonus) create an issue on the tool’s repository (if it doesn’t exist already) with the appropriate logs to help the tool’s maintainers notice the installation error and fix it.

function install_TOOL() {
    # git -C /opt/tools/ clone --depth 1
    local temp_fix_limit="YYYY-MM-DD"
    if [ "$(date +%Y%m%d)" -gt "$(date -d "$temp_fix_limit" +%Y%m%d)" ]; then
      criticalecho "Temp fix expired. Exiting."
    else
      git -C /opt/tools/ clone
      git -C /opt/tools/TOOL checkout 774f1c33efaaccf633ede6e704800345eb313878
    fi
}
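The expiry check itself can be tried in isolation. With a limit date in the past, the comparison triggers (GNU date is assumed for the -d option):

```shell
# Standalone demo of the expiry-date comparison used in the temp-fix pattern.
# With a limit date in the past, the fix is considered expired.
temp_fix_limit="2020-01-01"
if [ "$(date +%Y%m%d)" -gt "$(date -d "$temp_fix_limit" +%Y%m%d)" ]; then
    echo "Temp fix expired"
else
    echo "Temp fix still active"
fi
```

Both dates are converted to a numeric YYYYMMDD form, so a plain integer comparison is enough to detect that the limit date has passed.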

Adding to my-resources


This documentation is not written yet… Please contact us if you would like to contribute to this part and don’t know how.


Signing commits

To make the project as secure as possible, signed commits are now required for contributions. Using signatures for commits on GitHub serves several important purposes:

  • Authentication: it verifies the authenticity of the commit, ensuring that it was indeed made by the person claiming to have made it.

  • Integrity: it ensures that the commit hasn’t been tampered with since it was signed. Any changes to the commit after it has been signed will invalidate the signature.

  • Trust: this ensures that all contributions come from trusted sources.

  • Visibility: on GitHub, signed commits are marked with a “verified” label, giving users and collaborators confidence in the commit’s origin and integrity.

GitHub offers an official documentation on the matter that can be followed to setup and sign commits properly. Exegol’s documentation will sum it up briefly and link to it whenever it’s needed.

While SSH (+ FIDO2) is preferred since it offers better multi-factor signing capabilities (knowledge + hardware possession factors), people who don’t have the required hardware can proceed with GPG or a regular SSH key.
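For contributors going the SSH route, git can sign commits with an SSH key directly. A minimal sketch, assuming an existing ed25519 key pair (the key path is an assumption; adapt it to your own key):

```shell
# Sketch: configuring git to sign commits with an SSH key instead of GPG.
# The key path below is an assumption; point it at your own public key.
git config --global gpg.format ssh
git config --global user.signingkey "$HOME/.ssh/id_ed25519.pub"
git config --global commit.gpgsign true
```

The public key must also be added to the GitHub account as a signing key (not just an authentication key) for commits to show as verified.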

Generating a GPG key can be done by following GitHub’s official documentation on the matter (generating a new GPG key). TL;DR, the commands look something like this:

# for the email, indicate your GitHub public (no-reply) email
gpg --quick-generate-key "YOUR_NAME <YOUR_EMAIL>" ed25519 sign 0
gpg --list-secret-keys --keyid-format=long
gpg --armor --export $KEYID

Once the GPG key is generated, it can be added to the contributor’s GitHub profile. Again, GitHub’s documentation explains how to achieve that (adding a GPG key to your GitHub account).

Once the GPG key is generated and associated to the GitHub account, it can be used to sign commits. In order to achieve that, the contributor must configure git properly on their machine (telling git about your GPG key).

TL;DR: the commands look something like this to set it up for git CLI:

gpg --list-secret-keys --keyid-format=long
git config --global user.signingkey $KEYID

# (option 1) configure locally on a specific repo
cd /path/to/repository && git config commit.gpgsign true

# (option 2) configure for all git operations
git config --global commit.gpgsign true

To set it up on IDEs, proper official documentations can be followed (e.g. GitKraken, PyCharm).


The contributor’s GitHub account can be configured to mark unsigned commits as unverified or partially verified. While it’s not mandatory regarding contributions to Exegol, since the requirement is managed on Exegol repositories directly, it’s a nice thing to do. See GitHub’s documentation on vigilant mode.