Add AWS KMS support to SignerStore #452

Merged Mar 10, 2024 (4 commits)

Changes from 3 commits
31 changes: 12 additions & 19 deletions docs/source/guide/Docker_README.md
@@ -31,7 +31,6 @@ Here are some things you need to know:
[Secure Systems Library](https://github.com/secure-systems-lab/securesystemslib).
If you do not have a key we suggest you use the [RSTUF CLI tool to generate the key](https://repository-service-tuf.readthedocs.io/en/latest/guide/repository-service-tuf-cli/index.html).
* This key must be the same one used during the [RSTUF CLI ceremony](https://repository-service-tuf.readthedocs.io/en/latest/guide/repository-service-tuf-cli/index.html#ceremony-ceremony).
* This key must be available to RSTUF Worker using the `RSTUF_KEYVAULT_BACKEND`.

For more information read the [Deployment documentation](https://repository-service-tuf.readthedocs.io/en/latest/guide/deployment/index.html).

@@ -41,10 +40,7 @@ For more information read the [Deployment documentation](https://repository-serv

```shell
docker run --env="RSTUF_STORAGE_BACKEND=LocalStorage" \
--env="RSTUF_LOCAL_STORAGE_BACKEND_PATH=storage" \
--env="RSTUF_KEYVAULT_BACKEND=LocalKeyVault" \
--env="RSTUF_LOCAL_KEYVAULT_PATH=keyvault" \
--env="RSTUF_LOCAL_KEYVAULT_KEYS=online.key,strongPass" \
--env="RSTUF_LOCAL_STORAGE_BACKEND_PATH=/metadata" \
--env="RSTUF_BROKER_SERVER=guest:guest@rabbitmq:5672" \
--env="RSTUF_REDIS_SERVER=redis://redis" \
--env="RSTUF_SQL_SERVER=postgresql://postgres:secret@postgres:5432" \
@@ -132,31 +128,31 @@ Available types:

##### `AWSS3` (AWS S3)

* (Required) ``RSTUF_AWSS3_STORAGE_BUCKET``
* (Required) ``RSTUF_AWS_STORAGE_BUCKET``

The name of the region associated with the S3.
The name of s3 bucket to use.

* (Required) ``RSTUF_AWSS3_STORAGE_ACCESS_KEY``
* (Required) ``RSTUF_AWS_ACCESS_KEY_ID``

The access key to use when creating the client session to the S3.

This environment variable supports container secrets when the ``/run/secrets``
volume is added to the path.
Example: `RSTUF_AWSS3_STORAGE_ACCESS_KEY=/run/secrets/S3_ACCESS_KEY`
Example: `RSTUF_AWS_ACCESS_KEY_ID=/run/secrets/S3_ACCESS_KEY`

* (Required) ``RSTUF_AWSS3_STORAGE_SECRET_KEY``
* (Required) ``RSTUF_AWS_SECRET_ACCESS_KEY``

The secret key to use when creating the client session to the S3.

This environment variable supports container secrets when the ``/run/secrets``
volume is added to the path.
Example: ``RSTUF_AWSS3_STORAGE_ACCESS_KEY=/run/secrets/S3_SECRET_KEY``
Example: ``RSTUF_AWS_SECRET_ACCESS_KEY=/run/secrets/S3_SECRET_KEY``

* (Optional) ``RSTUF_AWSS3_STORAGE_REGION``
* (Optional) ``RSTUF_AWS_DEFAULT_REGION``

The name of the region associated with the S3.

* (Optional) ``RSTUF_AWSS3_STORAGE_ENDPOINT_URL``
* (Optional) ``RSTUF_AWS_ENDPOINT_URL``

The complete URL to use for the constructed client. Normally, the
client automatically constructs the appropriate URL to use when
@@ -177,13 +173,12 @@ In most use cases, the timeout of 60.0 seconds is sufficient.

#### `RSTUF_KEYVAULT_BACKEND`

Select a supported type of Key Vault Service.
Available types:
Select a supported type of Key Vault Service.

* `LocalKeyVault` (container volume)

**_NOTE:_** You can start the worker
service without a keyvault backend, but you need to configure one before the
service without a keyvault backend, but you need to configure one before the
[bootstrap ceremony](https://repository-service-tuf.readthedocs.io/en/latest/guide/repository-service-tuf-cli/index.html#ceremony-ceremony).

##### `LocalKeyVault` (container volume)
@@ -232,7 +227,7 @@ service without a keyvault backend, but you need to configure one before the

Example: ``RSTUF_LOCAL_KEYVAULT_KEYS=/run/secrets/ONLINE_KEY_1:/run/secrets/ONLINE_KEY_2``

#### (Optional, *experimental*) `RSTUF_ONLINE_KEY_DIR`
#### (Optional) `RSTUF_ONLINE_KEY_DIR`

Directory path for online signing key file. Expected file format is unencrypted PKCS8/PEM.

@@ -246,8 +241,6 @@ Example:
- RSTUF worker expects related private key under `/run/secrets/<file name>`




#### (Optional) `RSTUF_WORKER_ID`

Custom Worker ID. Default: `hostname` (Container hostname)
20 changes: 10 additions & 10 deletions repository_service_tuf_worker/services/storage/awss3.py
@@ -40,10 +40,10 @@ def __init__(

@classmethod
def configure(cls, settings: Dynaconf) -> "AWSS3":
access_key = parse_if_secret(settings.AWSS3_STORAGE_ACCESS_KEY)
secret_access_key = parse_if_secret(settings.AWSS3_STORAGE_SECRET_KEY)
region = settings.get("AWSS3_STORAGE_REGION")
endpoint = settings.get("AWSS3_STORAGE_ENDPOINT_URL")
access_key = parse_if_secret(settings.AWS_ACCESS_KEY_ID)
secret_access_key = parse_if_secret(settings.AWS_SECRET_ACCESS_KEY)
region = settings.get("AWS_DEFAULT_REGION")
endpoint = settings.get("AWS_ENDPOINT_URL")

s3_session = boto3.Session(
aws_access_key_id=access_key,
@@ -58,7 +58,7 @@ def configure(cls, settings: Dynaconf) -> "AWSS3":
endpoint_url=endpoint,
)
buckets = [bucket.name for bucket in s3_resource.buckets.all()]
bucket_name = settings.AWSS3_STORAGE_BUCKET
bucket_name = settings.AWS_STORAGE_BUCKET
if bucket_name not in buckets:
raise ValueError(f"Bucket '{bucket_name}' not found.")

@@ -78,23 +78,23 @@ def configure(cls, settings: Dynaconf) -> "AWSS3":
def settings(cls) -> List[ServiceSettings]:
return [
ServiceSettings(
names=["AWSS3_STORAGE_BUCKET"],
names=["AWS_STORAGE_BUCKET"],
required=True,
),
ServiceSettings(
names=["AWSS3_STORAGE_ACCESS_KEY"],
names=["AWS_ACCESS_KEY_ID"],
required=True,
),
ServiceSettings(
names=["AWSS3_STORAGE_SECRET_KEY"],
names=["AWS_SECRET_ACCESS_KEY"],
required=True,
),
ServiceSettings(
names=["AWSS3_STORAGE_REGION"],
names=["AWS_DEFAULT_REGION"],
required=False,
),
ServiceSettings(
names=["AWSS3_STORAGE_ENDPOINT_URL"],
names=["AWS_ENDPOINT_URL"],
required=False,
),
]
23 changes: 17 additions & 6 deletions repository_service_tuf_worker/signer.py
@@ -25,20 +25,20 @@ class FileNameSigner(CryptoSigner):
Provide method to load **unencrypted** PKCS8/PEM private key from file.

File path is constructed by joining base path in environment variable
``RSTUF_ONLINE_KEY_DIR`` with file in ``priv_key_uri``.
``ONLINE_KEY_DIR`` with file in ``priv_key_uri``.
A reviewer (Member) commented:

> You said this was done to allow re-using the Dynaconf setting name. Could you give an example where it is possible for the ONLINE_KEY_DIR name to be reused?
>
> I thought we wanted to keep all RSTUF-related env variables with the RSTUF prefix.
>
> PS: In your changes in docs/source/guide/Docker_README.md line 230 we describe it with its full name, RSTUF_ONLINE_KEY_DIR.

@kairoaraujo (Member) replied on Mar 7, 2024:

> Could you give an example where it is possible for the ONLINE_KEY_DIR name to be reused?

The ONLINE_KEY_DIR is not reused. Only the RSTUF_AWS* ones are, and those are common AWS env variables.

> I thought we wanted to keep all RSTUF-related env variables with the RSTUF prefix.
>
> PS: In your changes in docs/source/guide/Docker_README.md line 230 we describe it with its full name, RSTUF_ONLINE_KEY_DIR.

Dynaconf uses the RSTUF_ prefix to identify the environment variables it adds to its settings. They all still carry RSTUF_ in the environment; only inside the code, at the point of use, is the prefix gone, because Dynaconf removes it automatically.

@MVrachev (Member) replied on Mar 7, 2024:

Then why do we rename it here to ONLINE_KEY_DIR, when the full name under which you can find it is RSTUF_ONLINE_KEY_DIR?

@kairoaraujo (Member) replied on Mar 7, 2024:

I didn't get your question.

On the OS/container level we need to use the RSTUF_ prefix:

```shell
root@3f79f6f8a407:/opt/repository-service-tuf-worker# env | grep RSTUF
RSTUF_AWS_ACCESS_KEY_ID=test
RSTUF_AWS_ENDPOINT_URL=http://localstack:4566
RSTUF_BROKER_SERVER=redis://redis/1
RSTUF_SQL_SERVER=postgresql://postgres:secret@db:5432
RSTUF_REDIS_SERVER=redis://redis
RSTUF_AWS_SECRET_ACCESS_KEY=test
RSTUF_STORAGE_BACKEND=AWSS3
RSTUF_AWS_STORAGE_BUCKET=tuf-metadata
RSTUF_AWS_DEFAULT_REGION=us-east-1
```

But inside Dynaconf, the prefix is only used to map these variables into Dynaconf settings:

```
>>> repository._settings.environ
environ({'RSTUF_AWS_ACCESS_KEY_ID': 'test', 'RSTUF_AWS_ENDPOINT_URL': 'http://localstack:4566', 'HOSTNAME': '3f79f6f8a407', 'PYTHON_VERSION': '3.10.12', 'RSTUF_BROKER_SERVER': 'redis://redis/1', 'RSTUF_SQL_SERVER': 'postgresql://postgres:secret@db:5432', 'PWD': '/opt/repository-service-tuf-worker', 'PYTHON_SETUPTOOLS_VERSION': '65.5.1', 'HOME': '/root', 'LANG': 'C.UTF-8', 'GPG_KEY': 'A035C8C19219BA821ECEA86B64E628F8D684696D', 'RSTUF_REDIS_SERVER': 'redis://redis', 'RSTUF_AWS_SECRET_ACCESS_KEY': 'test', 'TERM': 'xterm', 'SHLVL': '1', 'WORKER_ID': '3f79f6f8a407', 'RSTUF_STORAGE_BACKEND': 'AWSS3', 'PYTHON_PIP_VERSION': '23.0.1', 'PYTHON_GET_PIP_SHA256': '96461deced5c2a487ddc65207ec5a9cffeca0d34e7af7ea1afc470ff0d746207', 'PYTHON_GET_PIP_URL': 'https://github.com/pypa/get-pip/raw/0d8570dc44796f4369b652222cf176b3db6ac70e/public/get-pip.py', 'RSTUF_AWS_STORAGE_BUCKET': 'tuf-metadata', 'PATH': '/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin', 'RSTUF_AWS_DEFAULT_REGION': 'us-east-1', 'RSTUF_WORKER_ID': '3f79f6f8a407', '_': '/usr/local/bin/python'})
```

The prefixed name is not the one used to look the setting up in Dynaconf:

```
>>> repository._worker_settings.get('AWS_ENDPOINT_URL')
'http://localstack:4566'
>>> repository._worker_settings.get('RSTUF_AWS_ENDPOINT_URL')
>>>
```

We use RSTUF_* in the docs for the user (Docker_README), but not in the code itself.
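A minimal sketch of this prefix stripping, assuming nothing beyond a Dynaconf object created with `envvar_prefix="RSTUF"` (the endpoint value below is illustrative):

```python
import os

from dynaconf import Dynaconf

# Illustrative value; any RSTUF_-prefixed variable behaves the same way.
os.environ["RSTUF_AWS_ENDPOINT_URL"] = "http://localstack:4566"

settings = Dynaconf(envvar_prefix="RSTUF")

# Dynaconf strips the prefix, so code reads the setting without "RSTUF_".
print(settings.get("AWS_ENDPOINT_URL"))        # http://localstack:4566
print(settings.get("RSTUF_AWS_ENDPOINT_URL"))  # None
```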


NOTE: Make sure to use the secrets management service of your deployment
platform to protect your private key!

Example::

RSTUF_ONLINE_KEY_DIR (env) "/run/secrets"
ONLINE_KEY_DIR (env) "/run/secrets"
priv_key_uri (arg): "fn:foo"

File path: "/run/secrets/foo"

Raises:
KeyError: RSTUF_ONLINE_KEY_DIR environment variable not set
KeyError: ONLINE_KEY_DIR environment variable not set
OSError: file cannot be loaded
ValueError: uri has no file name, or private key cannot be decoded,
or type does not match public key
@@ -47,7 +47,7 @@ class FileNameSigner(CryptoSigner):
"""

SCHEME = "fn"
DIR_VAR = "RSTUF_ONLINE_KEY_DIR"
DIR_VAR = "ONLINE_KEY_DIR"

@classmethod
def from_priv_key_uri(
@@ -94,6 +94,16 @@ def isolated_env(env: dict[str, str]):
os.environ.update(orig_env)


# List of Dynaconf settings needed in the signer environment
_AMBIENT_SETTING_NAMES = [
"ONLINE_KEY_DIR",
"AWS_ACCESS_KEY_ID",
"AWS_SECRET_ACCESS_KEY",
"AWS_ENDPOINT_URL",
"AWS_DEFAULT_REGION",
]


class SignerStore:
"""Generic signer store.

@@ -104,8 +114,9 @@ class SignerStore:
def __init__(self, settings: Dynaconf):
# Cache known ambient settings
self._ambient_settings: dict[str, str] = {}
if key_dir := settings.get("ONLINE_KEY_DIR"):
self._ambient_settings[FileNameSigner.DIR_VAR] = key_dir
for name in _AMBIENT_SETTING_NAMES:
if value := settings.get(name):
self._ambient_settings[name] = value

# Cache KEYVAULT setting as fallback
self._vault = settings.get("KEYVAULT")
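For illustration, here is a minimal, self-contained sketch of the `fn:` URI resolution that the `FileNameSigner` docstring above describes. The helper function is an assumption for readability; only the `ONLINE_KEY_DIR` variable and the `fn:<file name>` convention come from the diff.

```python
import os

DIR_VAR = "ONLINE_KEY_DIR"  # from FileNameSigner.DIR_VAR in the diff


def resolve_fn_uri(priv_key_uri: str) -> str:
    """Map 'fn:<file name>' to '<ONLINE_KEY_DIR>/<file name>' (illustrative helper)."""
    scheme, _, file_name = priv_key_uri.partition(":")
    if scheme != "fn" or not file_name:
        raise ValueError(f"invalid private key uri: {priv_key_uri}")
    # Raises KeyError if ONLINE_KEY_DIR is unset, matching the documented behaviour.
    key_dir = os.environ[DIR_VAR]
    return os.path.join(key_dir, file_name)


# Example from the docstring: ONLINE_KEY_DIR="/run/secrets", uri "fn:foo"
os.environ[DIR_VAR] = "/run/secrets"
assert resolve_fn_uri("fn:foo") == "/run/secrets/foo"
```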
8 changes: 8 additions & 0 deletions tests/files/aws/init-kms.sh
@@ -0,0 +1,8 @@
#!/usr/bin/env bash
awslocal kms create-key \
--key-spec RSA_4096 \
--key-usage SIGN_VERIFY

awslocal kms create-alias \
--alias-name alias/aws-test-key \
--target-key-id $(awslocal kms list-keys --query "Keys[0].KeyId" --output text)
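To double-check that LocalStack picked up the key and alias created by this script, here is a hedged sketch using boto3, assuming the endpoint and dummy credentials configured in the `local-aws-kms` tox environment further down:

```python
import boto3

# Endpoint and credentials mirror the local-aws-kms tox env (assumption for
# local testing only; these are not real AWS credentials).
kms = boto3.client(
    "kms",
    endpoint_url="http://localhost:4566",
    region_name="us-east-1",
    aws_access_key_id="test",
    aws_secret_access_key="test",
)

aliases = kms.list_aliases()["Aliases"]
print([a["AliasName"] for a in aliases])  # "alias/aws-test-key" should be listed
```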
@@ -46,9 +46,9 @@ def test_full_init(self, mocked_boto3):
def test_configure(self, mocked_boto3):
test_settings = pretend.stub(
get=pretend.call_recorder(lambda *a: None),
AWSS3_STORAGE_BUCKET="bucket",
AWSS3_STORAGE_ACCESS_KEY="access_key",
AWSS3_STORAGE_SECRET_KEY="secret_key",
AWS_STORAGE_BUCKET="bucket",
AWS_ACCESS_KEY_ID="access_key",
AWS_SECRET_ACCESS_KEY="secret_key",
)

service = awss3.AWSS3.configure(test_settings)
@@ -91,15 +91,15 @@ def test_configure(self, mocked_boto3):

def test_configure_bucket_not_found(self, mocked_boto3):
def _fake_get(key: str) -> Optional[str]:
if key == "AWSS3_STORAGE_REGION":
if key == "AWS_DEFAULT_REGION":
return "region"
return None

test_settings = pretend.stub(
get=pretend.call_recorder(lambda a: _fake_get(a)),
AWSS3_STORAGE_BUCKET="nonexistent-bucket",
AWSS3_STORAGE_ACCESS_KEY="access_key",
AWSS3_STORAGE_SECRET_KEY="secret_key",
AWS_STORAGE_BUCKET="nonexistent-bucket",
AWS_ACCESS_KEY_ID="access_key",
AWS_SECRET_ACCESS_KEY="secret_key",
)

service = None
@@ -137,23 +137,23 @@ def test_settings(self, mocked_boto3):

assert service_settings == [
awss3.ServiceSettings(
names=["AWSS3_STORAGE_BUCKET"],
names=["AWS_STORAGE_BUCKET"],
required=True,
),
awss3.ServiceSettings(
names=["AWSS3_STORAGE_ACCESS_KEY"],
names=["AWS_ACCESS_KEY_ID"],
required=True,
),
awss3.ServiceSettings(
names=["AWSS3_STORAGE_SECRET_KEY"],
names=["AWS_SECRET_ACCESS_KEY"],
required=True,
),
awss3.ServiceSettings(
names=["AWSS3_STORAGE_REGION"],
names=["AWS_DEFAULT_REGION"],
required=False,
),
awss3.ServiceSettings(
names=["AWSS3_STORAGE_ENDPOINT_URL"],
names=["AWS_ENDPOINT_URL"],
required=False,
),
]
@@ -390,9 +390,9 @@ def test_get_DeserializationError(self, mocked_boto3):
def test_put(self, mocked_boto3):
test_settings = pretend.stub(
get=pretend.call_recorder(lambda *a: None),
AWSS3_STORAGE_BUCKET="bucket",
AWSS3_STORAGE_ACCESS_KEY="access_key",
AWSS3_STORAGE_SECRET_KEY="secret_key",
AWS_STORAGE_BUCKET="bucket",
AWS_ACCESS_KEY_ID="access_key",
AWS_SECRET_ACCESS_KEY="secret_key",
)

service = awss3.AWSS3.configure(test_settings)
@@ -410,9 +410,9 @@ def test_put(self, mocked_boto3):
def test_put_ClientErro(self, mocked_boto3):
test_settings = pretend.stub(
get=pretend.call_recorder(lambda *a: None),
AWSS3_STORAGE_BUCKET="bucket",
AWSS3_STORAGE_ACCESS_KEY="access_key",
AWSS3_STORAGE_SECRET_KEY="secret_key",
AWS_STORAGE_BUCKET="bucket",
AWS_ACCESS_KEY_ID="access_key",
AWS_SECRET_ACCESS_KEY="secret_key",
)
service = awss3.AWSS3.configure(test_settings)

25 changes: 24 additions & 1 deletion tests/unit/tuf_repository_service_worker/test_signer.py
@@ -2,19 +2,21 @@
#
# SPDX-License-Identifier: MIT

import os
from pathlib import Path
from unittest.mock import patch

import pytest
from dynaconf import Dynaconf
from pretend import stub
from securesystemslib.signer import CryptoSigner, Key
from securesystemslib.signer import AWSSigner, CryptoSigner, Key

from repository_service_tuf_worker.interfaces import IKeyVault
from repository_service_tuf_worker.signer import (
RSTUF_ONLINE_KEY_URI_FIELD,
FileNameSigner,
SignerStore,
isolated_env,
)

_FILES = Path(__file__).parent.parent.parent / "files"
@@ -131,3 +133,24 @@ def test_get_from_file_name_uri_no_envvar(self):

with patch.dict("os.environ", {}, clear=True), pytest.raises(KeyError):
store.get(fake_key)

@pytest.mark.skipif(
not os.environ.get("RSTUF_AWS_ENDPOINT_URL"), reason="No AWS endpoint"
)
def test_get_from_aws(self):
# Import test public key of given key type and keyid alias from AWS KMS
# - see tests/files/aws/init-kms.sh for how such a key is created
# - see tox.ini for how credentials etc. are passed via env vars
scheme = "rsassa-pss-sha256"
aws_keyid = "alias/aws-test-key"

settings = Dynaconf(envvar_prefix="RSTUF")
with isolated_env(settings.to_dict()):
uri, key = AWSSigner.import_(aws_keyid, scheme)

key.unrecognized_fields[RSTUF_ONLINE_KEY_URI_FIELD] = uri

# Load signer from AWS KMS
store = SignerStore(settings)
signer = store.get(key)
assert isinstance(signer, AWSSigner)
31 changes: 31 additions & 0 deletions tox.ini
@@ -64,3 +64,34 @@ commands =
python =
3.10: py310,pep8,lint,requirements,test
3.11: py311,pep8,lint,requirements,test

[testenv:local-aws-kms]
deps =
-r{toxinidir}/requirements-dev.txt
localstack

allowlist_externals =
localstack
bash

setenv =
DATA_DIR = ./data-test
RSTUF_AWS_ACCESS_KEY_ID = test
RSTUF_AWS_SECRET_ACCESS_KEY = test
RSTUF_AWS_ENDPOINT_URL = http://localhost:4566/
RSTUF_AWS_DEFAULT_REGION = us-east-1

commands_pre =
# Start virtual AWS KMS
localstack start --detached
localstack wait

# Create signing key
bash {toxinidir}/tests/files/aws/init-kms.sh

commands =
python3 -m pytest tests/unit/tuf_repository_service_worker/test_signer.py -k test_get_from_aws

commands_post =
# Stop virtual AWS KMS
localstack stop
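Assuming a standard tox setup, this environment can be run locally with `tox -e local-aws-kms`: it starts LocalStack in detached mode, provisions the signing key via `tests/files/aws/init-kms.sh`, runs only the `test_get_from_aws` test, and stops LocalStack again afterwards.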