tractus-x-umbrella


Umbrella Chart

This umbrella chart provides a basis for running end-to-end tests or creating a sandbox environment of the Catena-X automotive dataspace network consisting of Eclipse Tractus-X OSS components.

The chart aims for a completely automated setup of a fully functional network that does not require manual setup steps.

Note for R24.05

The versions of integrated components correspond to the overarching Release 24.05.

:warning: The 24.05 release does not include a Managed Identity Wallet (MIW), aka the FOSS Wallet of Tractus-X, as it was not yet able to cover the functionalities required for the Self-Sovereign Identity flow introduced with R24.05. To test and ship R24.05, a commercial solution was used: the Decentralized Identity Management (DIM) Wallet. To cover the wallet functionalities in the E2E Adopter Journey Data exchange, the iatp-mock was added to the umbrella Helm chart; please also see Precondition for IATP Mock. For the E2E Adopter Journey Portal, there is no mock available yet to cover the wallet functionalities.

Usage

Running this Helm chart requires a Kubernetes cluster (> 1.24.x); it is recommended to run it on Minikube. Assuming you have a running cluster and your kubectl context is set to that cluster, you can use the following instructions to install the chart as an umbrella release.
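For example, one quick way to confirm that kubectl points at your Minikube cluster (assuming the default context name) is:

# Show the currently active kubectl context
kubectl config current-context
# Switch to the Minikube context if necessary
kubectl config use-context minikube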

Note

In its current state of development, this chart as well as the following installation guide have been tested on Linux and Mac.

Linux is the preferred platform to install this chart on, as the network setup with Minikube is very straightforward on Linux.

We are working on testing the chart’s reliability on Windows as well and updating the installation guide accordingly.

Cluster setup

Recommendations for resources:

| CPU (cores) | Memory (GB) |
| :---------: | :---------: |
|      4      |      6      |

Use the dashboard provided by Minikube or a tool like OpenLens to get an overview about the deployed components:

minikube dashboard

Cluster setup on Linux & Mac

minikube start --cpus=4 --memory 6gb

Cluster setup on Windows

For DNS resolution to work, you either need to use the --driver=hyperv option, which requires administrator privileges:

minikube start --cpus=4 --memory 6gb --driver=hyperv

or use the native Kubernetes cluster in Docker Desktop with a manual ingress setup:

# 1. Enable Kubernetes under Settings > Kubernetes > Enable Kubernetes
# 2. Install an NGINX Ingress Controller
helm upgrade --install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx --create-namespace
# 3. Skip the minikube addons and assume 127.0.0.1 as the cluster IP

:warning: The rest of the tutorial assumes a minikube cluster, however.

Network setup

To enable local access via ingress, use the corresponding Minikube addon:

minikube addons enable ingress

Make sure that the DNS resolution for the hosts is in place:

minikube addons enable ingress-dns

Then follow installation step 3 of the ingress-dns addon setup and add the Minikube IP as a DNS server for your OS:
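Once one of the OS-specific configurations below is in place, you can verify the resolution with a query such as the following (host name taken from the Hosts section):

# Query the Minikube ingress-dns directly
nslookup portal.tx.test $(minikube ip)
# Query via your OS resolver, which should now forward .test lookups to Minikube
nslookup portal.tx.test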

Network setup on Linux

On Linux, first identify which domain name resolver your system uses, then update its configuration accordingly. To that end, look at the first lines of /etc/resolv.conf:
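A quick way to identify the resolver in use (a sketch; paths and service names can differ between distributions):

# Inspect the beginning of the resolver configuration
head -n 5 /etc/resolv.conf
# A nameserver of 127.0.0.53 usually indicates systemd-resolved
systemctl is-active systemd-resolved
# Check whether NetworkManager is running and managing DNS
systemctl is-active NetworkManager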

Start Minikube and apply the configuration below that matches your system.

Linux OS with resolvconf

Update the file /etc/resolvconf/resolv.conf.d/base to have the following contents.

search test
nameserver 192.168.99.169
timeout 5

Replace 192.168.99.169 with the output of minikube ip.

If your Linux OS uses systemctl, run the following commands.

sudo resolvconf -u
systemctl disable --now resolvconf.service

See https://linux.die.net/man/5/resolver

Linux OS with NetworkManager

NetworkManager can run an integrated caching DNS server (the dnsmasq plugin) and can be configured to use separate nameservers per domain.

Edit /etc/NetworkManager/NetworkManager.conf and enable dns=dnsmasq by adding:

[main]
dns=dnsmasq

Also see dns= in NetworkManager.conf.

Configure dnsmasq to handle domain names ending with .test:

sudo mkdir -p /etc/NetworkManager/dnsmasq.d/
echo "server=/test/$(minikube ip)" | sudo tee /etc/NetworkManager/dnsmasq.d/minikube.conf

Restart Network Manager:

systemctl restart NetworkManager.service

Ensure your /etc/resolv.conf contains only a single nameserver:

cat /etc/resolv.conf | grep nameserver
nameserver 127.0.0.1

Linux OS with systemd-resolved

Run the following commands to add the minikube DNS for .test domains:

sudo mkdir -p /etc/systemd/resolved.conf.d
sudo tee /etc/systemd/resolved.conf.d/minikube.conf << EOF
[Resolve]
DNS=$(minikube ip)
Domains=~test
EOF
sudo systemctl restart systemd-resolved

If you still face DNS issues, add the hosts to your /etc/hosts file.

Network setup on Mac

Create a file in /etc/resolver/minikube-test with the following contents.

domain tx.test
nameserver 192.168.49.2
search_order 1
timeout 5

Replace 192.168.49.2 with your minikube ip if it differs.

To find out the IP address of your Minikube execute:

minikube ip

If you still face DNS issues, add the hosts to your /etc/hosts file.

Additional network setup for Mac

Install and start Docker Mac Net Connect.
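At the time of writing, the project's README suggests installing it via Homebrew; check the project page for up-to-date instructions:

# Install via Homebrew
brew install chipmk/tap/docker-mac-net-connect
# Run the service and register it to start at boot
sudo brew services start chipmk/tap/docker-mac-net-connect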

We also recommend executing the usage example after installation to verify the setup.

Network setup on Windows

Open Powershell as Administrator and execute the following.

Add-DnsClientNrptRule -Namespace ".test" -NameServers "$(minikube ip)"

The following will remove any matching rules before creating a new one. This is useful for updating the minikube ip.

Get-DnsClientNrptRule | Where-Object {$_.Namespace -eq '.test'} | Remove-DnsClientNrptRule -Force; Add-DnsClientNrptRule -Namespace ".test" -NameServers "$(minikube ip)"

If you still face DNS issues, add the hosts to your C:\Windows\System32\drivers\etc\hosts file.

Hosts

Collection of hosts to be added to the /etc/hosts (Linux and Mac) or the C:\Windows\System32\drivers\etc\hosts (Windows) file, in case the network setup for Linux, Mac or Windows doesn't work:

192.168.49.2    centralidp.tx.test
192.168.49.2    sharedidp.tx.test
192.168.49.2    portal.tx.test
192.168.49.2    portal-backend.tx.test
192.168.49.2    semantics.tx.test
192.168.49.2    sdfactory.tx.test
192.168.49.2    ssi-credential-issuer.tx.test
192.168.49.2    dataconsumer-1-dataplane.tx.test
192.168.49.2    dataconsumer-1-controlplane.tx.test
192.168.49.2    dataprovider-dataplane.tx.test
192.168.49.2    dataconsumer-2-dataplane.tx.test
192.168.49.2    dataconsumer-2-controlplane.tx.test
192.168.49.2    bdrs-server.tx.test
192.168.49.2    iatpmock.tx.test
192.168.49.2    business-partners.tx.test
192.168.49.2    pgadmin4.tx.test

Replace 192.168.49.2 with your minikube ip if it differs.

To find out the IP address of your Minikube execute:

minikube ip

Install

Select a subset of components that are designed to integrate with each other for a certain functional use case and enable them at install time.

The currently available components are the following:

:warning: Note

Please be aware of Note for R24.05

Use released chart

helm repo add tractusx-dev https://eclipse-tractusx.github.io/charts/dev
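It can help to refresh the repository index and confirm that the chart is available, for example:

helm repo update
helm search repo tractusx-dev/umbrella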

:grey_question: Command explanation

helm install is used to install a chart in Kubernetes using Helm.

--set COMPONENT_1.enabled=true,COMPONENT_2.enabled=true Enables the components by setting their respective enabled values to true.

umbrella is the release name for the chart.

tractusx-dev/umbrella specifies the chart to install, with tractusx-dev being the repository name and umbrella being the chart name.

--namespace umbrella specifies the namespace in which to install the chart.

--create-namespace creates a namespace with the name umbrella.

Option 1

Install with your chosen components enabled:

helm install \
  --set COMPONENT_1.enabled=true,COMPONENT_2.enabled=true,COMPONENT_3.enabled=true \
  umbrella tractusx-dev/umbrella \
  --namespace umbrella \
  --create-namespace
Option 2

Choose to install one of the predefined subsets (currently in focus of the E2E Adopter Journey):

Data Exchange Subset

helm install \
  --set centralidp.enabled=true,managed-identity-wallet.enabled=true,dataconsumerOne.enabled=true,tx-data-provider.enabled=true \
  umbrella tractusx-dev/umbrella \
  --namespace umbrella \
  --create-namespace

Optional

Enable dataconsumerTwo at upgrade:

helm upgrade \
  --set centralidp.enabled=true,managed-identity-wallet.enabled=true,dataconsumerOne.enabled=true,tx-data-provider.enabled=true,dataconsumerTwo.enabled=true \
  umbrella tractusx-dev/umbrella \
  --namespace umbrella

Portal Subset

helm install \
  --set portal.enabled=true,centralidp.enabled=true,sharedidp.enabled=true \
  umbrella tractusx-dev/umbrella \
  --namespace umbrella \
  --create-namespace

BPDM Subset

helm install \
  --set bpdm.enabled=true,centralidp.enabled=true \
  umbrella tractusx-dev/umbrella \
  --namespace umbrella \
  --create-namespace

To set your own configuration and secret values, install the helm chart with your own values file:

helm install -f your-values.yaml umbrella tractusx-dev/umbrella --namespace umbrella --create-namespace

Use local repository

Make sure to clone the tractus-x-umbrella repository beforehand.
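For example:

git clone https://github.com/eclipse-tractusx/tractus-x-umbrella.git
cd tractus-x-umbrella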

Then navigate to the chart directory:

cd charts/umbrella/

Download the dependencies of the tx-data-provider subchart:

helm dependency update ../tx-data-provider

Download the chart dependencies of the umbrella helm chart:

helm dependency update

:grey_question: Command explanation

helm install is used to install a Helm chart.

-f your-values.yaml or -f values-*.yaml specifies the values file(s) to use for configuration.

umbrella is the release name for the Helm chart.

. specifies the path to the chart directory.

--namespace umbrella specifies the namespace in which to install the chart.

--create-namespace creates a namespace with the name umbrella.

Option 1

Install your chosen components by enabling them in a your-values.yaml file:

helm install -f your-values.yaml umbrella . --namespace umbrella --create-namespace

In general, all your specific configuration and secret values should be set by installing with your own values file.
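A minimal sketch of such a file, assuming you only want to toggle components (the keys mirror the --set examples above; any further configuration depends on the respective subcharts):

# your-values.yaml
portal:
  enabled: true
centralidp:
  enabled: true
sharedidp:
  enabled: true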

Option 2

Choose to install one of the predefined subsets (currently in focus of the E2E Adopter Journey):

Data Exchange Subset

helm install -f values-adopter-data-exchange.yaml umbrella . --namespace umbrella --create-namespace

Optional

Enable dataconsumerTwo by setting it true in values-adopter-data-exchange.yaml and then executing an upgrade:

dataconsumerTwo:
  enabled: true

helm upgrade -f values-adopter-data-exchange.yaml umbrella . --namespace umbrella

Portal Subset

helm install -f values-adopter-portal.yaml umbrella . --namespace umbrella --create-namespace

E2E Adopter Journeys

Data exchange

:warning: Please be aware of Note for R24.05

Involved components:

EDC, MIW, DTR, Vault (data provider and consumer in tx-data-provider), CentralIdP.

TBD.

Get to know the Portal

Perform the first login and send out an invitation to a company to join the network (an SMTP account is required to be configured in a custom values.yaml file).

Proceed with the login to http://portal.tx.test to verify that everything is set up as expected.

Credentials to log into the initial example realm (CX-Operator):

cx-operator@tx.test
tractusx-umbr3lla!

%%{
  init: {
    'flowchart': { 'diagramPadding': '10', 'wrappingWidth': '', 'nodeSpacing': '', 'rankSpacing':'', 'titleTopMargin':'10', 'curve':'basis'},
    'theme': 'base',
    'themeVariables': {
      'primaryColor': '#b3cb2d',
      'primaryBorderColor': '#ffa600',
      'lineColor': '#ffa600',
      'tertiaryColor': '#fff'
    }
  }
}%%
        graph TD
          classDef stroke stroke-width:2px
          classDef addext fill:#4cb5f5,stroke:#b7b8b6,stroke-width:2px
          iam1(IAM: centralidp Keycloak):::stroke
          iam2(IAM: sharedidp Keycloak):::stroke
          portal(Portal):::stroke
          subgraph Login Flow
              iam1 --- portal & iam2
            end
          linkStyle 0,1 stroke:lightblue

The relevant hosts are the following:

If you have TLS enabled (see Self-signed TLS setup (Optional)), make sure to accept the risk of the self-signed certificates for all the hosts before performing the first login:

:warning: Please be aware of Note for R24.05

Note for onboarding process

Since the onboarding process requires the Clearinghouse to work properly, but the Clearinghouse currently isn't available as a FOSS application, you can skip that step with the following SQL script, which must be executed against the portal database.

WITH applications AS (
    SELECT distinct ca.id as Id, ca.checklist_process_id as ChecklistId
    FROM portal.company_applications as ca
             JOIN portal.application_checklist as ac ON ca.id = ac.application_id
    WHERE 
      ca.application_status_id = 7 
    AND ac.application_checklist_entry_type_id = 6
    AND (ac.application_checklist_entry_status_id = 4 OR ac.application_checklist_entry_status_id = 1)
),
updated AS (
 UPDATE portal.application_checklist
     SET application_checklist_entry_status_id = 3
     WHERE application_id IN (SELECT Id FROM applications)
     RETURNING *
)
INSERT INTO portal.process_steps (id, process_step_type_id, process_step_status_id, date_created, date_last_changed, process_id, message)
SELECT gen_random_uuid(), 12, 1, now(), NULL, a.ChecklistId, NULL
FROM applications a;
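One way to run the script, assuming the PostgreSQL pod and credentials listed under Database Access, a database named portal, and the script saved locally as skip-clearinghouse.sql (adjust names to your deployment):

kubectl -n umbrella exec -i portal-backend-postgresql-0 -- \
  env PGPASSWORD=dbpasswordportal psql -U postgres -d portal < skip-clearinghouse.sql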

Uninstall

To teardown your setup, run:

helm delete umbrella --namespace umbrella

:warning:

If persistence for one or more components is enabled, the persistent volume claims (PVCs) and connected persistent volumes (PVs) need to be removed manually even if you deleted the release from the cluster.
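A possible cleanup, assuming nothing else in the umbrella namespace should be kept:

# Delete the remaining PVCs in the umbrella namespace
kubectl delete pvc --all --namespace umbrella
# Check for leftover PVs that may still need manual deletion
kubectl get pv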

Ingresses

Currently enabled ingresses:

Database Access

This chart also contains a pgadmin4 instance for easy access to the deployed Postgres databases, which are only available from within the Kubernetes cluster.

pgadmin4 is enabled by default in the predefined subsets for data exchange and portal.

Address: pgadmin4.tx.test

Credentials to login into pgadmin4:

pgadmin4@txtest.org
tractusxpgdamin4

The database server connections need to be added manually to pgadmin4.

Default username for all connections:

postgres

Default port for all connections:

5432

The following are some of the available connections:

| Host                            | Password                   |
| :------------------------------ | :------------------------- |
| portal-backend-postgresql       | dbpasswordportal           |
| umbrella-centralidp-postgresql  | dbpasswordcentralidp       |
| umbrella-sharedidp-postgresql   | dbpasswordsharedidp        |
| umbrella-miw-postgres           | dbpasswordmiw              |
| umbrella-dataprovider-db        | dbpasswordtxdataprovider   |
| umbrella-dataconsumer-1-db      | dbpassworddataconsumerone  |
| umbrella-dataconsumer-2-db      | dbpassworddataconsumertwo  |
| umbrella-bpdm-postgres          | dbpasswordbpdm             |

Keycloak Admin Console

Access to admin consoles:

Default username for centralidp and sharedidp:

admin

Password centralidp:

adminconsolepwcentralidp

Password sharedidp:

adminconsolepwsharedidp

Seeding

See Overall Seeding.

Self-signed TLS setup (Optional)

Some of the components are prepared to be configured with TLS enabled (see “uncomment the following line for tls” comments in values.yaml).

If you'd like to make use of that, make sure to execute this step beforehand.

Install the cert-manager chart in the same namespace where the umbrella chart will be located.

helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install \
  cert-manager jetstack/cert-manager \
  --namespace umbrella \
  --create-namespace \
  --version v1.14.4 \
  --set installCRDs=true

Configure the self-signed certificate and issuer to be used by the ingress resources.

If you have the repository checked out you can run:

kubectl apply -f ./charts/umbrella/cluster-issuer.yaml

or otherwise you can run:

kubectl apply -f - <<EOF
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: my-selfsigned-ca
  namespace: umbrella
spec:
  isCA: true
  commonName: tx.test
  secretName: root-secret
  privateKey:
    algorithm: RSA
    size: 2048
  issuerRef:
    name: selfsigned-issuer
    kind: ClusterIssuer
    group: cert-manager.io
  subject:
    organizations:
      - CX
    countries:
      - DE
    provinces:
      - Some-State
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: my-ca-issuer
spec:
  ca:
    secretName: root-secret
EOF

See cert-manager self-signed for reference.
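To check that the issuers and the CA certificate were created, you can run, for example:

kubectl get clusterissuers
kubectl -n umbrella get certificate my-selfsigned-ca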

Precondition for Semantic Hub

If you enable semantic-hub, the fuseki Docker image must be built. Build the fuseki Docker image by following the steps below:

Precondition for IATP Mock

If you enable iatpmock (e.g. by using values-adopter-data-exchange.yaml), the iatp-mock Docker image must be built first:

docker build iatp-mock/ -t tractusx/iatp-mock:testing --platform linux/amd64
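Since the image is built locally, it also needs to be made available inside your cluster; on Minikube this can be done, for example, with:

minikube image load tractusx/iatp-mock:testing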

How to contribute

Before contributing, make sure you read and understand our contributing guidelines. We appreciate every contribution, be it bug reports, feature requests, test automation or enhancements to the chart(s), but please keep the following in mind: