
Kubernetes

You can run Replicator in a Kubernetes cluster in the same cloud as your managed Kurrent Cloud cluster. The Kubernetes cluster workloads must be able to reach the managed KurrentDB cluster. Usually, with proper peering between your VPC (or virtual network) and the Kurrent Cloud network, this works without issues.

We provide guidelines about connecting managed Kubernetes clusters.

The easiest way to deploy Replicator to Kubernetes is by using the provided Helm chart. On this page, you'll find detailed instructions for using the Replicator Helm chart.

Ensure you have Helm 3 installed on your machine:

```shell
$ helm version
version.BuildInfo{Version:"v3.5.2", GitCommit:"167aac70832d3a384f65f9745335e9fb40169dc2", GitTreeState:"dirty", GoVersion:"go1.15.7"}
```

If you don’t have Helm, follow their installation guide.

Add the Replicator repository:

```shell
helm repo add kurrent-replicator https://kurrent-io.github.io/replicator
helm repo update
```
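
To confirm the repository was added and see which chart versions are available, you can search it (the output shown by `helm search repo` will depend on the published chart versions):

```shell
helm search repo kurrent-replicator
```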

Configure the Replicator options using a new values.yml file:

```yaml
replicator:
  reader:
    connectionString: "GossipSeeds=node1.esdb.local:2113,node2.esdb.local:2113,node3.esdb.local:2113; HeartBeatTimeout=500; UseSslConnection=False; DefaultUserCredentials=admin:changeit;"
  sink:
    connectionString: "esdb://admin:changeit@[cloudclusterid].mesdb.eventstore.cloud:2113"
    partitionCount: 6
  filters:
    - type: eventType
      include: "."
      exclude: '((Bad|Wrong)\w+Event)'
  transform:
    type: http
    config: "http://transform.somenamespace.svc:5000"
prometheus:
  metrics: true
  operator: true
```
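
The `exclude` filter above is a regular expression matched against event types. As a quick sanity check of that pattern, you can try it against a few sample event type names with `grep -E` (an approximation: Replicator evaluates the regex itself, but these simple patterns behave the same):

```shell
# Events matching the exclude pattern are dropped; everything else is kept
for t in OrderPlacedEvent BadPaymentEvent WrongAddressEvent; do
  if echo "$t" | grep -Eq '(Bad|Wrong)\w+Event'; then
    echo "$t -> excluded"
  else
    echo "$t -> kept"
  fi
done
```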

Available options are:

| Option | Description | Default |
| --- | --- | --- |
| `replicator.reader.connectionString` | Connection string for the source cluster or instance | nil |
| `replicator.reader.protocol` | Reader protocol | tcp |
| `replicator.reader.pageSize` | Reader page size (only applicable for the TCP protocol) | 4096 |
| `replicator.sink.connectionString` | Connection string for the target cluster or instance | nil |
| `replicator.sink.protocol` | Writer protocol | grpc |
| `replicator.sink.partitionCount` | Number of partitioned concurrent writers | 1 |
| `replicator.sink.partitioner` | Custom JavaScript partitioner | null |
| `replicator.sink.bufferSize` | Size of the sink buffer, in events | 1000 |
| `replicator.scavenge` | Enable real-time scavenge | true |
| `replicator.runContinuously` | Set to false if you want Replicator to stop when it reaches the end of the $all stream | true |
| `replicator.filters` | Add one or more of the provided filters | [] |
| `replicator.transform` | Configure the event transformation | |
| `replicator.transform.bufferSize` | Size of the prepare buffer (filtering and transformations), in events | 1000 |
| `prometheus.metrics` | Enable annotations for Prometheus | false |
| `prometheus.operator` | Create a PodMonitor custom resource for the Prometheus Operator | false |
| `resources.requests.cpu` | CPU request | 250m |
| `resources.requests.memory` | Memory request | 512Mi |
| `resources.limits.cpu` | CPU limit | 1 |
| `resources.limits.memory` | Memory limit | 1Gi |
| `pvc.storageClass` | Persistent volume storage class name | null |
| `terminationGracePeriodSeconds` | Timeout for graceful workload shutdown; must be long enough for the sink buffer to flush | 300 |
| `jsConfigMaps` | List of existing config maps to be used as JS code files (for a JS transform, for example) | {} |
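
For example, the resource settings and shutdown grace period from the table can be overridden in the same values.yml (the numbers here are illustrative, not recommendations):

```yaml
resources:
  requests:
    cpu: 500m
    memory: 1Gi
  limits:
    cpu: 1
    memory: 2Gi
# Give the sink buffer more time to flush on shutdown
terminationGracePeriodSeconds: 600
```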

At a minimum, provide both connection strings and ensure that workloads in your Kubernetes cluster can reach both the source and the target EventStoreDB clusters or instances.

Follow the documentation to configure a JavaScript transform in your values.yml file.

Then append the following option to your helm install command:

```shell
--set-file transformJs=./transform.js
```

Follow the documentation to configure a custom partitioner in your values.yml file.

Then append the following option to your helm install command:

```shell
--set-file partitionerJs=./partitioner.js
```
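
Put together, an install command with both optional files attached might look like this (the file names `transform.js` and `partitioner.js` are assumptions; pass only the flags you actually need):

```shell
helm install kurrent-replicator \
  kurrent-replicator/replicator \
  --values values.yml \
  --set-file transformJs=./transform.js \
  --set-file partitionerJs=./partitioner.js \
  --namespace kurrent-replicator
```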

When your values.yml file is complete, deploy the release using Helm. Remember to set the current kubectl context to the cluster you are deploying to.

```shell
helm install kurrent-replicator \
  kurrent-replicator/replicator \
  --values values.yml \
  --namespace kurrent-replicator
```

You can choose another namespace, but it must exist before you deploy.
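
If the namespace doesn't exist yet, create it first:

```shell
kubectl create namespace kurrent-replicator
```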

Replication starts immediately after the deployment, assuming that all the connection strings are correct and the Replicator workload has network access to both the source and sink EventStoreDB instances.
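
To verify that the workload is running and follow its progress, you can inspect the pods and logs (the exact pod name depends on the release; list the pods first and substitute the name you see):

```shell
# Check pod status in the release namespace
kubectl get pods --namespace kurrent-replicator

# Follow the Replicator logs (replace <pod-name> with the actual pod name)
kubectl logs --namespace kurrent-replicator <pod-name> --follow
```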