Building C/C++
Jolt provides task base classes for building C/C++ projects. The classes are designed to be easily extended and customized to fit your specific needs. They generate Ninja build files which are then used to build your projects.
Basics
Below is an example of a library and a program. The library contains a function returning a message. The program calls this function and prints the message.
// lib/message.cpp
#include "message.h"

const char *message() {
    return "Hello " RECIPIENT "!";
}

// program/main.cpp
#include <cstdlib>
#include <iostream>

#include "lib/message.h"

int main() {
    std::cout << message() << std::endl;
    return EXIT_SUCCESS;
}
To build the library and the program we use this Jolt recipe:
from jolt import Parameter
from jolt.plugins.ninja import CXXLibrary, CXXExecutable

class Message(CXXLibrary):
    recipient = Parameter(default="world", help="Name of greeting recipient.")
    headers = ["lib/message.h"]
    sources = ["lib/message.cpp"]
    macros = ['RECIPIENT="{recipient}"']

class HelloWorld(CXXExecutable):
    requires = ["message"]
    sources = ["program/main.cpp"]
Metadata
Jolt automatically configures include paths, link libraries, and other build attributes for the HelloWorld program based on metadata found in the artifact of the Message library task. In the example, the Message library task relies upon CXXLibrary.publish to collect public headers and to export the required metadata, such as include paths and linking information. Customization is possible by overriding the publish method, as illustrated below. This implementation of Message is equivalent to the previous example.
class Message(CXXLibrary):
    recipient = Parameter(default="world", help="Name of greeting recipient.")
    sources = ["lib/message.*"]
    macros = ['RECIPIENT="{recipient}"']

    def publish(self, artifact, tools):
        with tools.cwd("{outdir}"):
            artifact.collect("*.a", "lib/")
        artifact.cxxinfo.libpaths.append("lib")
        artifact.collect("lib/*.h", "include/")
        artifact.cxxinfo.incpaths.append("include")
The cxxinfo artifact metadata can be used with other build systems too, such as CMake, Meson, and Autotools. It enables your Ninja tasks to stay oblivious to whatever build system their dependencies use, as long as binary compatibility is guaranteed.
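For example, a library built by another build system can export the same cxxinfo metadata from its publish method, and Ninja consumers will pick it up just as they would from a CXXLibrary. The sketch below is illustrative only: the CMake invocation, project paths, and library name are assumptions, not a prescribed API.

```python
from jolt import Task

class ZlibCMake(Task):
    """Hypothetical task wrapping a CMake-built library (illustrative paths)."""

    def run(self, deps, tools):
        self.installdir = tools.builddir("install")
        with tools.cwd(tools.builddir()):
            # Assumed project location and standard CMake install flow.
            tools.run("cmake {joltdir}/external/zlib -DCMAKE_INSTALL_PREFIX={installdir}")
            tools.run("cmake --build . --target install")

    def publish(self, artifact, tools):
        with tools.cwd(self.installdir):
            artifact.collect("lib/*.a", "lib/")
            artifact.collect("include/*.h", "include/")
        # Same cxxinfo attributes as in the Ninja example above.
        artifact.cxxinfo.libpaths.append("lib")
        artifact.cxxinfo.incpaths.append("include")
```

A Ninja task that lists this task in its requires would then compile and link against the CMake-built library without any build-system-specific knowledge.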
Parameterization
To support build customization based on parameters, several class decorators can be used to extend a task with conditional build attributes.
The first example uses a boolean debug parameter to disable optimizations and set a preprocessor macro. The decorators enable Ninja to consider alternative attributes in addition to the default cxxflags and macros. The names of the alternatives are expanded with the values of parameters. When the debug parameter is assigned the value true, the cxxflags_debug_true and macros_debug_true attributes will be matched and included in the build. If the debug parameter value is false, no extra flags or macros will be included, because there are no cxxflags_debug_false and macros_debug_false attributes in the class.
@ninja.attributes.cxxflags("cxxflags_debug_{debug}")
@ninja.attributes.macros("macros_debug_{debug}")
class Message(ninja.CXXLibrary):
    debug = BooleanParameter()
    cxxflags_debug_true = ["-g", "-Og"]
    macros_debug_true = ["DEBUG"]
    sources = ["lib/message.*"]
The next example includes source files conditionally.
@ninja.attributes.sources("sources_{os}")
class Message(ninja.CXXLibrary):
    os = Parameter(values=["linux", "windows"])
    sources = ["lib/*.cpp"]
    sources_linux = ["lib/posix/*.cpp"]
    sources_windows = ["lib/win32/*.cpp"]
Influence
The Ninja tasks automatically let the content of the listed header and source files influence the task identity. However, sometimes source files may #include headers which are not listed. This is an error which may result in objects not being correctly recompiled when a header changes. To protect against such errors, Jolt uses output from the compiler to ensure that files included during a compilation properly influence the task.

In the example below, the header message.h is included from message.cpp, but it is listed neither in headers nor in sources.
from jolt import *
from jolt.plugins.ninja import *

class Message(CXXLibrary):
    sources = ["lib/message.cpp"]
This would be an error because Jolt no longer tracks the content of the message.h header, and message.cpp would not be properly recompiled. However, thanks to the builtin sanity checks, trying to build this library would fail:
$ jolt build message
[ ERROR] Execution started (message b9961000)
[ STDOUT] [1/2] [CXX] message.cpp
[ STDOUT] [1/2] [AR] libmessage.a
[WARNING] Missing influence: message.h
[ ERROR] Execution failed after 00s (message b9961000)
[ ERROR] task is missing source influence (message)
The solution is to ensure that the header is covered by influence, either by listing it in headers or sources, or by using an influence decorator such as @influence.files.
class Message(CXXLibrary):
    sources = ["lib/message.h", "lib/message.cpp"]

from jolt import influence

@influence.files("lib/message.h")
class Message(CXXLibrary):
    sources = ["lib/message.cpp"]
Headers from artifacts of dependencies are exempt from the sanity checks. They already influence the consuming task implicitly. This is also true for files in build directories.
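For instance, a task that consumes the Message artifact can include its published headers without any extra influence decoration, since the dependency's identity already covers them. The file name consumer.cpp below is illustrative:

```python
class Consumer(CXXLibrary):
    requires = ["message"]
    # consumer.cpp may #include "message.h" from the message artifact
    # without triggering the missing-influence error above.
    sources = ["consumer.cpp"]
```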
Compiler
The default compiler is GCC on Linux and MSVC on Windows. To use a different compiler, set the toolchain attribute in the task class:
class HelloWorld(CXXExecutable):
    sources = ["main.cpp"]

    # Use a GNU toolchain instead of the default.
    toolchain = ninja.GNUToolchain

class HelloWorld(CXXExecutable):
    sources = ["main.cpp"]

    # Use MSVC instead of the default.
    toolchain = ninja.MSVCToolchain
The compiler can be further customized by setting environment variables, either on the command line or through task artifact metadata.
Available environment variables:

Variable   Description
AR         Archiver.
AS         Assembler.
ASFLAGS    Assembler flags.
CC         C compiler.
CXX        C++ compiler.
CFLAGS     C compiler flags.
CXXFLAGS   C++ compiler flags.
LD         Linker.
LDFLAGS    Linker flags.
The environment variables can be set through an artifact's environ attribute. Such metadata is automatically applied to consuming compilation tasks and takes precedence over the default environment variables.

In this example, the compiler task sets environment variables for the helloworld task and makes it use the Clang compiler instead of the default.
class Compiler(Task):
    def publish(self, artifact, tools):
        artifact.environ.CC = "clang"
        artifact.environ.CXX = "clang++"
        artifact.environ.CFLAGS = "-g -Og"
        artifact.environ.CXXFLAGS = "-g -Og"

class HelloWorld(CXXExecutable):
    requires = ["compiler"]
    sources = ["main.cpp"]
The example above can be extended to allow the user to override the compiler from the command line. A variant parameter can be used to select the compiler from a list of predefined compilers. The publish method in turn sets the environment variables based on the value of the variant parameter.
class Compiler(Task):
    variant = Parameter("clang", values=["clang", "gcc"])

    def publish(self, artifact, tools):
        if self.variant == "clang":
            artifact.environ.CC = "clang"
            artifact.environ.CXX = "clang++"
        if self.variant == "gcc":
            artifact.environ.CC = "gcc"
            artifact.environ.CXX = "g++"
The default variant parameter value can be overridden from the command line. For example, to build the helloworld task using GCC:
$ jolt build helloworld -d compiler:variant=gcc
The -d compiler:variant=gcc command line argument instructs Jolt to override the default value of the variant parameter in the compiler task. The new value changes the identity hash of the compiler artifact, which triggers a rebuild of all depending tasks.
This approach with default-valued parameters can also be used to enable other use-cases where you temporarily may want:

- cross-compilation to different architectures
- code coverage builds
- builds with custom flags
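The cross-compilation case can be sketched in the same style as the Compiler task above. The parameter name, target architecture, and cross-compiler prefixes below are illustrative assumptions; adjust them to your toolchain:

```python
from jolt import Parameter, Task

class CrossCompiler(Task):
    """Hypothetical selector between a native and a cross toolchain."""
    arch = Parameter("native", values=["native", "arm64"])

    def publish(self, artifact, tools):
        if self.arch == "arm64":
            # Assumed GNU cross-toolchain prefix for 64-bit ARM Linux.
            artifact.environ.CC = "aarch64-linux-gnu-gcc"
            artifact.environ.CXX = "aarch64-linux-gnu-g++"
            artifact.environ.AR = "aarch64-linux-gnu-ar"
```

A consuming task would list crosscompiler in its requires, and a cross build could then be requested with, for example, -d crosscompiler:arch=arm64 on the command line.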
Another similar approach is to pass the compiler as a parameter directly to the compilation task. We introduce a base class that can be shared by all our compilation tasks. It defines the compiler parameter and requires the compiler task. The parameter is then used to select the compiler from the command line:
@attributes.requires("requires_base")
class ExecutableBase(CXXExecutable):
    abstract = True
    compiler = Parameter("clang", values=["gcc", "clang"])
    requires_base = ["compiler:variant={compiler}"]

class HelloWorld(ExecutableBase):
    sources = ["main.cpp"]
$ jolt build helloworld:compiler=gcc
Custom Rules
Rules are used to transform files from one type to another. An example is the rule that compiles a C/C++ file to an object file. Ninja tasks can be extended with additional rules beyond those already built in, and the builtin rules may also be overridden.
To define a new rule for a type of file, assign a Rule object to an arbitrary attribute of the compilation task being defined. Below is an example where a rule has been added to generate Qt moc source files from headers.
class MyQtProject(CXXExecutable):
    sources = ["myqtproject.h", "myqtproject.cpp"]

    moc_rule = Rule(
        command="moc -o $out $in",
        infiles=[".h"],
        outfiles=["{outdir}/{in_path}/{in_base}_moc.cpp"])
The moc rule applies to all .h header files listed as sources, i.e. myqtproject.h. It takes the input header file and generates a corresponding moc source file, myqtproject_moc.cpp. The moc source file is then automatically fed to the builtin compiler rule, from which the output is an object file, myqtproject_moc.o.
Below, another example illustrates how to override one of the builtin compilation rules. The example also defines an environment variable that will be accessible to the rule.
class MyQtProject(CXXExecutable):
    sources = ["myqtproject.h", "myqtproject.cpp"]

    custom_cxxflags = EnvironmentVariable()

    cxx_rule = Rule(
        command="g++ $custom_cxxflags -o $out -c $in",
        infiles=[".cpp"],
        outfiles=["{outdir}/{in_path}/{in_base}{in_ext}.o"])
$ CUSTOM_CXXFLAGS=-DDEBUG jolt build myqtproject
Code Coverage
Ninja tasks have builtin support for code coverage instrumentation, data collection, and reporting. By setting the coverage class attribute to True, instrumentation is enabled and coverage data files will be generated when the executable is run. Currently, only GCC/Clang toolchains are supported, not MSVC.
The coverage data can be automatically collected and processed into plain-text or HTML reports with the help of task class decorators. The decorators rely on either Gcov (plain-text) or Lcov (HTML) to carry out the work.
Example:
from jolt import Runner, Task
from jolt.plugins import ninja

class Exe(ninja.CXXExecutable):
    """ Builds executable with code coverage instrumentation """
    coverage = True
    sources = ["main.cpp"]

@ninja.attributes.coverage_data()
class Run(Runner):
    """ Runs executable and collects coverage data """
    requires = ["exe"]

@ninja.attributes.coverage_report_lcov()
class LcovReport(Task):
    """ Generates HTML report from code coverage data """
    name = "report/lcov"
    requires = ["run"]

@ninja.attributes.coverage_report_gcov()
class GcovReport(Task):
    """ Generates gcov report from code coverage data """
    name = "report/gcov"
    requires = ["run"]
Conan Package Manager
The Conan package manager is an excellent way to quickly obtain prebuilt binaries of third-party libraries. It has been integrated into Jolt, allowing you to seamlessly use Conan packages with your Jolt Ninja tasks.
In the example below, Conan is used to collect the Boost C++ libraries. Boost is then used in our example application. All build metadata is automatically configured.
from jolt.plugins.conan import Conan

class Boost(Conan):
    requires = ["toolchain"]
    packages = ["boost/1.74.0"]

class HelloWorld(CXXExecutable):
    requires = ["toolchain", "boost"]
    sources = ["src/main.cpp"]
With the toolchain as a dependency also for Boost, Conan will be able to fetch the appropriate binaries that match your toolchain. If no such binaries are available, Conan will build them for you.
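The toolchain task referenced above is not defined in this example. A minimal stand-in could be an environment-publishing task in the style of the Compiler example earlier; the class name and compiler choice below are assumptions:

```python
from jolt import Task

class Toolchain(Task):
    """Hypothetical toolchain task; publishes compiler settings for consumers."""

    def publish(self, artifact, tools):
        # Consumers (both Conan and Ninja tasks) inherit these variables.
        artifact.environ.CC = "gcc"
        artifact.environ.CXX = "g++"
```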
Building with Chroot
Jolt can use chroot environments to provide a consistent build environment across different platforms. A chroot is typically faster to start and stop than a Docker container, but it is less isolated and secure. The chroot feature is not available on Windows.
The example task below creates a Docker image based on the Alpine Linux distribution. The Dockerfile is defined in the task class. It can also be defined in a separate file and pointed to by the dockerfile attribute. When built, the image is extracted into a directory tree that is published into the task artifact.
from jolt import Chroot
from jolt.plugins.docker import DockerImage

class Alpine(DockerImage):
    dockerfile = '''
    FROM alpine:3.7
    '''

    # Extract the image into a directory tree
    extract = True

    # Don't publish the image as an archive
    imagefile = None

class AlpineChroot(Chroot):
    name = "alpine/chroot"

    # Task artifact that contains the chroot
    chroot = "alpine"
The AlpineChroot class is a Chroot resource that can be required by other tasks. The built directory tree is automatically entered when a consuming task is executing commands. Only one chroot environment can be used by a task at a time. The workspace and the local artifact cache are mounted into the chroot environment, and the current user is mapped to the chroot user.
from jolt import Task

class Example(Task):
    """ Example task that uses the alpine/chroot to run a command. """
    requires = ["alpine/chroot"]

    def run(self, deps, tools):
        tools.run("cat /etc/os-release")

$ jolt build example
[   INFO] Execution started (example d6058305)
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.7.3
PRETTY_NAME="Alpine Linux v3.7"
HOME_URL="http://alpinelinux.org"
BUG_REPORT_URL="http://bugs.alpinelinux.org"
[   INFO] Execution finished after 00s (example d6058305)
A more flexible alternative to using chroots as resources is to enter the chroot environment on demand directly in the consuming task as in the example below. A task can then use multiple chroot environments at different times.
from jolt import Task

class Example2(Task):
    """ Example task that uses the alpine image to run a command. """
    requires = ["alpine"]

    def run(self, deps, tools):
        with tools.chroot(deps["alpine"]):
            tools.run("cat /etc/os-release")
Building with Docker
Jolt can use Docker containers to provide a consistent build environment across different platforms. The example task below creates a Docker image based on the Alpine Linux distribution. The Dockerfile is defined in the task class. It can also be defined in a separate file and pointed to by the dockerfile attribute.
from jolt.plugins.docker import DockerImage

class Alpine(DockerImage):
    dockerfile = '''
    FROM alpine:3.7
    RUN apk add --no-cache python3
    '''
The Docker image is built using the jolt build command. The image is tagged with the name of the task and its hash identity, and saved to a file that is published into the task artifact.
$ jolt build alpine
The image can then be used to create a container that is used as a chroot environment when executing tasks. The required image file is automatically loaded from the artifact cache when the container is created. The workspace and the local artifact cache are mounted into the container and the current user is mapped to the container user.
from jolt.plugins.docker import DockerContainer

class AlpineContainer(DockerContainer):
    name = "alpine/container"

    # The image to use for the container.
    # This is either a task name or a full image name.
    image = "alpine"

    # Mark the container as a chroot container.
    # Consumer tasks will run all commands in the container.
    chroot = True
The container is used as a resource by other tasks which means that the container is automatically started and stopped when a consumer task is executed. Only one container can be used by a task at a time.
from jolt import Task

class Example(Task):
    """ Example task that uses the alpine/container to run a command. """
    requires = ["alpine/container"]

    def run(self, deps, tools):
        tools.run("cat /etc/os-release")

$ jolt build example
[   INFO] Execution started (example d6058305)
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.7.3
PRETTY_NAME="Alpine Linux v3.7"
HOME_URL="http://alpinelinux.org"
BUG_REPORT_URL="http://bugs.alpinelinux.org"
[   INFO] Execution finished after 00s (example d6058305)
Building with Nix
Jolt can use the Nix package manager to provision build environments with tools and dependencies for tasks to use. Required packages can be listed directly in the task class.
The example task below provisions three versions of the Go programming language and uses them to build three different versions of the same program for comparison.
# go.jolt
from jolt import influence
from jolt import Task

@influence.files("main.go")
class App(Task):
    def run(self, deps, tools):
        self.builddir = tools.builddir()
        for version in ["1_19", "1_20", "1_21"]:
            with tools.nixpkgs(packages=[f"go_{version}"],
                               path=["nixpkgs=channel:nixos-23.11"]):
                tools.run("go build -o {builddir}/app.v{} main.go", version)

    def publish(self, artifact, tools):
        with tools.cwd(self.builddir):
            artifact.collect("app.*")
It is important to specify a Nix channel to use. The channel is a collection of Nix packages and is used to resolve package names to package paths and to fetch the packages from a binary cache. Without a channel, the Nix package manager may not be able to find the packages or the environment may not be deterministically reproducible.
It is also possible to create a Nix derivation in a separate file and use it in the task class:
# env.nix
let
  nixpkgs = fetchTarball "https://github.com/NixOS/nixpkgs/tarball/nixos-23.11";
  pkgs = import nixpkgs {};
in pkgs.mkShell {
  packages = [
    pkgs.go
  ];
}
The derivation file is pointed to by the nixfile attribute:
# derivation.jolt
from jolt import influence
from jolt import Task

@influence.files("env.nix")
@influence.files("main.go")
class DerivationApp(Task):
    name = "app/derivation"

    def run(self, deps, tools):
        self.builddir = tools.builddir()

        # Build the go app using a Nix shell derivation
        with tools.nixpkgs(nixfile="env.nix"):
            tools.run("go build -o {builddir}/app.bin main.go")

    def publish(self, artifact, tools):
        with tools.cwd(self.builddir):
            artifact.collect("app.bin")
The Nix package manager is not available on Windows (except in WSL).
Container Images
The Jolt system is designed to be deployed as a set of containers. The following container images are available on Docker Hub:

Image                  Description
robrt/jolt             Jolt client image.
robrt/jolt-cache       The HTTP-based cache service image.
robrt/jolt-dashboard   The dashboard web application image.
robrt/jolt-scheduler   The scheduler application image.
robrt/jolt-worker      The worker application image.
Deploying a Build Cluster
Jolt is designed to be deployed as a set of containers. To deploy a build cluster you typically use a container orchestration environment such as Kubernetes or Docker Swarm. See their respective documentation for installation instructions.
The different components of the build cluster are:

- The Jolt scheduler, which is responsible for build and task scheduling.
- The Jolt worker, which executes tasks as instructed by the scheduler.
- The artifact cache, an HTTP server used to cache build artifacts.
- The Jolt dashboard, a web application used to monitor the build cluster.
Each of the components is deployed as a separate container. Information about the images and their configuration environment variables can be found in Container Images.
Adapting Task Definitions
Task classes may have to be adapted to work in a distributed execution environment. For example, Jolt will by default not transfer any workspace files to a worker. Such dependencies, typically source repositories, must be listed as task requirements. See the Jolt test suite for examples of how to do this.
Another common issue is that workers don't have the required tools installed. Those tools should be packaged by Jolt tasks and listed as requirements in order to be automatically provisioned on the workers. They can also be installed manually in the worker container image, but this is not recommended as it makes administration of the build cluster more difficult, especially when multiple different versions of the same tool are required.
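A tool-provisioning task might look like the following sketch. The task name, download URL, archive layout, and PATH handling are illustrative assumptions, not a prescribed recipe:

```python
from jolt import Task

class CMakeTool(Task):
    """Hypothetical task that provisions a pinned CMake for consuming tasks."""
    name = "tools/cmake"
    version = "3.28.1"

    def run(self, deps, tools):
        self.builddir = tools.builddir()
        with tools.cwd(self.builddir):
            # Illustrative download; pin the version so the artifact
            # identity is stable and reproducible.
            tools.run("curl -LO https://example.com/cmake-{version}.tar.gz")
            tools.run("tar xf cmake-{version}.tar.gz")

    def publish(self, artifact, tools):
        with tools.cwd(self.builddir):
            artifact.collect("cmake-{version}")
        # Consumers that require tools/cmake get the tool on their PATH.
        artifact.environ.PATH.append("cmake-{version}/bin")
```

Workers then fetch the tool artifact from the cache on demand, instead of relying on a preinstalled copy in the container image.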
Docker Swarm
Docker Swarm is an easy-to-use container orchestration tool which can be used to deploy and manage the Jolt build cluster. The Docker stack yaml file below deploys a scheduler, two workers, an artifact cache, and the dashboard.
version: "3.5"

services:
  cache:
    image: robrt/jolt-cache:latest
    environment:
      - JOLT_CACHE_INSECURE=true
      - JOLT_CACHE_MAX_SIZE=100GB
      - JOLT_CACHE_VERBOSITY=2
    ports:
      - "8080:8080"
    volumes:
      - cache-http:/data

  dashboard:
    image: robrt/jolt-dashboard:latest
    ports:
      - "80:80"

  scheduler:
    image: robrt/jolt-scheduler:latest
    ports:
      - "9090:9090"

  worker:
    environment:
      - "JOLT_CACHE_URI=http://cache"
    image: robrt/jolt-worker:latest
    deploy:
      replicas: 2
    configs:
      - source: worker.conf
        target: /root/.config/jolt/config
    volumes:
      - cache-node:/root/.cache/jolt
      - /etc/machine-id:/etc/machine-id

configs:
  worker.conf:
    file: ./worker.conf

volumes:
  cache-node:
  cache-http:
The Jolt workers are configured in the worker.conf file:
[jolt]
# The location of the local Jolt artifact cache.
cachedir = /data/cache

# The maximum size of the cache in bytes.
# cachesize = 10G

[scheduler]
# URI of the Jolt scheduler inside the build cluster:
# uri = tcp://scheduler:9090

[cache]
# Location of the Jolt cache inside the build cluster:
uri = http://cache:8080
The file configures the URIs of the scheduler service and the HTTP cache. In the example, local Docker volumes are used as storage for artifacts. In a real deployment, persistent volumes are recommended. The administrator should also configure the maximum size allowed for the local cache in each node with the jolt.cachesize configuration key. If multiple workers are deployed on the same node, the local cache may be shared between them in the same directory. Fast SSD storage is recommended for the local cache and the worker workspace.
To deploy the system into a swarm, run:
$ docker stack deploy -c jolt.yaml jolt
You can then scale up the number of workers to a number suitable for your swarm:
$ docker service scale jolt_worker=10
Scaling is possible even with tasks in progress as long as they don’t cause any side effects. If a task is interrupted because the worker is terminated, the scheduler will redeliver the task execution request to another worker.
The newly deployed build cluster is utilized by configuring the Jolt client as follows:
[jolt]
# Disable artifact upload in local builds.
# This is overridden when running a distributed network build.
# It can be overridden on the command line with the --upload flag.
upload = false

[cache]
# Location of the Jolt cache service.
# Replace 'localhost' with the hostname or IP of the cache in your deployment.
uri = http://127.0.0.1:8080

[scheduler]
# Location of the Jolt scheduler.
# Replace 'localhost' with the hostname or IP of the scheduler in your deployment.
uri = tcp://127.0.0.1:9090
These configuration keys can also be set from the command line:
$ jolt config scheduler.uri tcp://127.0.0.1
$ jolt config http.uri http://127.0.0.1
If your local machine is not part of the swarm, you will need to replace 127.0.0.1 with the IP address of one of the nodes in the swarm or, preferably, a load-balancing hostname.
To execute a task in the swarm, pass the -n/--network flag to the build command:
$ jolt build -n <task>
Alternatively, if you are using a separate configuration file:
$ jolt -c client.conf build --network <task>
Kubernetes
Kubernetes is a more complex container orchestration tool which can be used to deploy and manage the Jolt build cluster. The Kubernetes deployment yaml file below deploys a scheduler, two workers, an artifact cache, as well as the dashboard. Notice the inline FIXME comments in the yaml file, which need to, or should, be replaced with actual values.
apiVersion: v1
kind: ConfigMap
metadata:
  name: jolt-config
data:
  scheduler.yaml: |-
    listen_grpc:
      - tcp://:9090
    listen_http:
      - tcp://:8080
    public_http:
      - http://jolt-scheduler:8080
    logstash:
      size: 10GB
      storage: disk
      path: /root/logstash
    dashboard:
      uri: "http://jolt-dashboard"
  worker.conf: |-
    # This configuration is used by the Jolt application
    # that is installed and executed by worker containers.
    [jolt]
    # The location of the local Jolt artifact cache.
    cachedir = /data/cache

    # The maximum size of the cache in bytes.
    # FIXME: Replace with an actual value.
    cachesize = 100G

    [cache]
    # Location of the Jolt cache inside the build cluster:
    uri = http://jolt-cache:8080

    [scheduler]
    # URI of the Jolt scheduler inside the build cluster:
    uri = tcp://jolt-scheduler:9090
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jolt-cache
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jolt-cache
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/path: "/metrics"
        prometheus.io/port: "8080"
      labels:
        app: jolt-cache
    spec:
      containers:
        - name: jolt-cache
          env:
            - name: JOLT_CACHE_INSECURE
              value: "true"
            - name: JOLT_CACHE_MAX_SIZE
              # FIXME: Replace with an actual value.
              value: "100GiB"
            - name: JOLT_CACHE_VERBOSITY
              value: "2"
          image: robrt/jolt-cache:latest
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: jolt-cache
              mountPath: /data
      volumes:
        - name: jolt-cache
          # FIXME: Replace with an actual persistent volume claim.
          #
          # This volume is used to store task artifacts.
          # An emptyDir volume is used as an example, but
          # the volume should typically be backed by a
          # persistent volume claim that survives pod restarts.
          emptyDir: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jolt-dashboard
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jolt-dashboard
  template:
    metadata:
      labels:
        app: jolt-dashboard
    spec:
      containers:
        - name: jolt-dashboard
          image: robrt/jolt-dashboard:latest
          ports:
            - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jolt-scheduler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jolt-scheduler
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/path: "/metrics"
        prometheus.io/port: "8080"
      labels:
        app: jolt-scheduler
    spec:
      containers:
        - name: jolt-scheduler
          image: robrt/jolt-scheduler:latest
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: jolt-config
              mountPath: /etc/jolt
      volumes:
        - name: jolt-config
          configMap:
            name: jolt-config
            items:
              - key: scheduler.yaml
                path: scheduler.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jolt-worker
spec:
  replicas: 2
  selector:
    matchLabels:
      app: jolt-worker
  template:
    metadata:
      labels:
        app: jolt-worker
    spec:
      containers:
        - name: jolt-worker
          env:
            - name: JOLT_CACHE_DIR
              value: /data/cache
            - name: JOLT_SCHEDULER_URI
              value: "tcp://jolt-scheduler:9090"
          image: robrt/jolt-worker:latest
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: jolt-cache
              mountPath: /data/cache
            - name: jolt-config
              mountPath: /root/.config/jolt
            - name: jolt-ws
              mountPath: /data/ws
            - name: machine-id
              mountPath: /etc/machine-id
      volumes:
        - name: machine-id
          hostPath:
            path: /etc/machine-id
        - name: jolt-config
          configMap:
            name: jolt-config
            items:
              - key: worker.conf
                path: config
        - name: jolt-cache
          # FIXME: Replace with an actual hostPath volume.
          #
          # This volume is used to store the artifacts locally in the worker.
          # An emptyDir volume is used as an example, but
          # the volume should typically be backed by a hostPath volume
          # with a path that is shared between all workers on the node.
          #
          # hostPath:
          #   path: /data/cache
          emptyDir: {}
        - name: jolt-ws
          emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: jolt-cache
spec:
  selector:
    app: jolt-cache
  ports:
    - name: "http"
      port: 8080
      targetPort: 8080
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  name: jolt-dashboard
spec:
  selector:
    app: jolt-dashboard
  ports:
    - name: "http"
      port: 80
      targetPort: 80
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  name: jolt-scheduler
spec:
  selector:
    app: jolt-scheduler
  ports:
    - name: "http"
      port: 8080
      targetPort: 8080
    - name: "grpc"
      port: 9090
      targetPort: 9090
  type: LoadBalancer
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jolt-cache
spec:
  rules:
    # FIXME:
    # Replace this with the actual domain name of the cache.
    - host: cache.jolt.domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: jolt-cache
                port:
                  number: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jolt-dashboard
spec:
  rules:
    # FIXME:
    # Replace this with the actual domain name of the dashboard.
    - host: dashboard.jolt.domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: jolt-dashboard
                port:
                  number: 8080
To deploy the system into a Kubernetes cluster, run:
$ kubectl apply -f jolt.yaml
You can then scale up the number of workers to a number suitable for your cluster:
$ kubectl scale deployment jolt-worker --replicas=10
Scaling is possible even with tasks in progress as long as they don’t cause any side effects. If a task is interrupted because the worker is terminated, the scheduler will redeliver the task execution request to another worker.
The newly deployed build cluster is utilized by configuring the Jolt client as follows:
[jolt]
# Disable artifact upload in local builds.
# This is overridden when running a distributed network build.
# It can be overridden on the command line with the --upload flag.
upload = false

[cache]
# Location of the Jolt cache service.
# Replace '<cache-host>' with the hostname or IP of the cache in your deployment.
uri = http://<cache-host>:8080

[scheduler]
# Location of the Jolt scheduler.
# Replace '<scheduler-host>' with the hostname or IP of the scheduler in your deployment.
uri = tcp://<scheduler-host>:9090
http_uri = http://<scheduler-host>:8080
The placeholder hosts should be replaced with the actual hostnames or IPs of the services in the Kubernetes cluster. The services are typically exposed through a load balancer and/or an ingress controller. Both methods are exemplified in the yaml file, but may not work out of the box in all Kubernetes installations. Run the following command to find the ExternalIP addresses of the services:
$ kubectl get services jolt-cache jolt-scheduler
The client configuration keys can also be set from the command line:
$ jolt config scheduler.uri tcp://<scheduler-service-name-or-ip>:<port>
$ jolt config http.uri http://<cache-service-name-or-ip>:<port>
To execute a task in the cluster, pass the -n/--network flag to the build command:
$ jolt build -n <task>
Alternatively, if you are using a separate configuration file:
$ jolt -c client.conf build --network <task>