Originally posted on the OpenShift 4.3 blog by Eduardo Arango
Introduction
Using generally available packages (in the form of container images) from an official source or a certified provider comes with a big caveat for performance-sensitive workloads.
These packages may provide ABI compatibility, but they are not optimized for specialized hardware (like GPUs or high-performance NICs), nor for a specific CPU architecture. The best way to address this is to compile your packages (that is, build your images) on your own deployment.
OpenShift provides a way to seamlessly build images based on defined events, called builds. A build is the process of transforming input parameters into a resulting object. Most often, the process is used to transform input parameters or source code into a runnable image. A BuildConfig object is the definition of the entire build process.
The missing piece for building hardware-specific images is orchestrating the build process across the different available resources. In this post you will learn about the Node Feature Discovery (NFD) operator and how to tie it to OpenShift builds to produce hardware-specific image builds.
The first part describes the NFD operator and how you can use it to manage the detection of hardware features in the cluster. The second part describes how to create an imageStream and a BuildConfig driven by a GitHub webhook, and how to use the information from the NFD operator to schedule node-specific builds. The third part presents a sample app to test what you have learned.
The Node Feature Discovery Operator
The Node Feature Discovery (NFD) Operator manages the detection of hardware features and configuration in an OpenShift cluster by labeling the nodes with hardware-specific information. NFD labels each host with node-specific attributes, like PCI devices, kernel version, OS version, and many more. See the upstream NFD project for more information.
The NFD operator can be found on the Operator Hub by searching for “Node Feature Discovery”. After following the install steps, you can go to “Installed Operators” in the OpenShift cluster and see the operator listed. Inside, a card instructs you to create an instance.
Click on “Create Instance” to get help from the OpenShift web console, which will auto-generate the needed YAML file and allow you to create the object from the console.
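The generated manifest is typically minimal. Here is a sketch of what it might look like; the apiVersion and required spec fields vary between operator versions, and the namespace shown is an assumption, so treat the console-generated YAML as authoritative:
```yaml
# Minimal sketch of a NodeFeatureDiscovery custom resource (illustrative only).
# The apiVersion may be v1alpha1 or v1 depending on the operator version.
apiVersion: nfd.openshift.io/v1
kind: NodeFeatureDiscovery
metadata:
  name: nfd-instance
  namespace: openshift-operators   # assumption: the namespace the operator was installed into
spec: {}                           # an empty spec falls back to the operator defaults
```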
Once the NFD operator is deployed, you can go to a node's dashboard and check all the node labels generated by the operator.
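The exact labels depend on the node's hardware and the NFD version; an illustrative excerpt (the values are examples, not taken from a real node) could look like this:
```yaml
labels:
  beta.kubernetes.io/arch: amd64
  feature.node.kubernetes.io/cpu-hardware_multithreading: "true"
  feature.node.kubernetes.io/cpu-cpuid.AVX2: "true"
  feature.node.kubernetes.io/kernel-version.full: 4.18.0-147.8.1.el8_1.x86_64
  feature.node.kubernetes.io/system-os_release.ID: rhcos
```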
By reading the generated labels, you can understand the hardware characteristics of the OpenShift node; for example, "beta.kubernetes.io/arch=amd64" and "feature.node.kubernetes.io/cpu-hardware_multithreading=true" indicate a node with an amd64 architecture and multithreading enabled.
Defining a BuildConfig
BuildConfig is a powerful tool in OpenShift. OpenShift Container Platform leverages Kubernetes by creating containers from build images and pushing them to a container image registry.
The first step is to create a dedicated namespace to hold the builds:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: multiarch-build
  labels:
    openshift.io/cluster-monitoring: "true"
```
For this example, you are pointing the builds at a repository on GitHub. First, you need to create the secrets that the GitHub and generic webhook triggers will use (the WebHookSecretKey value below is the base64 encoding of the string multiarch-build):
```yaml
kind: Secret
apiVersion: v1
metadata:
  name: arch-dummy-github-webhook-secret
  namespace: multiarch-build
data:
  WebHookSecretKey: bXVsdGlhcmNoLWJ1aWxk
---
kind: Secret
apiVersion: v1
metadata:
  name: arch-dummy-generic-webhook-secret
  namespace: multiarch-build
data:
  WebHookSecretKey: bXVsdGlhcmNoLWJ1aWxk
```
With the namespace and secrets in place, you can now create the imageStream and BuildConfig that continuously watch for the user-defined triggers and keep the image up to date. Image streams are part of the OpenShift extension APIs: an image stream is a named reference to container images, and OpenShift extension resources reference container images indirectly through image streams.
The following YAML files can be generated via the OpenShift Developer web console. Once you have generated the imageStream and BuildConfig YAML, make sure they look like the following:
```yaml
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  labels:
    app: arch-dummy
  name: arch-dummy
  namespace: multiarch-build
spec: {}
---
kind: BuildConfig
apiVersion: build.openshift.io/v1
metadata:
  name: arch-dummy
  namespace: multiarch-build
  labels:
    app: arch-dummy
    app.kubernetes.io/component: arch-dummy
    app.kubernetes.io/instance: arch-dummy
    app.kubernetes.io/part-of: arch-dummy-app
  annotations:
    app.openshift.io/vcs-ref: master
    app.openshift.io/vcs-uri: 'https://github.com/ArangoGutierrez/Arch-Dummy'
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
    beta.kubernetes.io/arch: amd64
  resources:
    requests:
      cpu: "100m"
      memory: "256Mi"
  output:
    to:
      kind: ImageStreamTag
      name: 'arch-dummy:latest'
  successfulBuildsHistoryLimit: 3
  failedBuildsHistoryLimit: 3
  strategy:
    type: Docker
    dockerStrategy:
      dockerfilePath: build/Dockerfile
  postCommit: {}
  source:
    type: Git
    git:
      uri: 'https://github.com/ArangoGutierrez/Arch-Dummy'
      ref: master
    contextDir: /
  triggers:
    - type: ImageChange
      imageChange: {}
    - type: GitHub
      github:
        secretReference:
          name: arch-dummy-github-webhook-secret
    - type: ConfigChange
  runPolicy: Parallel
```
Three lines in the YAML above are worth noting (they are not auto-generated via the Developer web console): the nodeSelector key leverages the NFD operator labels to orchestrate the image builds on nodes with specific features. For example, to schedule container builds only on worker nodes with the amd64 architecture:
```yaml
nodeSelector:
  node-role.kubernetes.io/worker: ""
  beta.kubernetes.io/arch: amd64
```
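The same mechanism works for any NFD label. As a hypothetical sketch, a build that must run on a worker node exposing an NVIDIA PCI device (vendor ID 10de) could use a selector like the one below; the exact PCI label key is an assumption here, since it depends on the NFD version and its device-label configuration:
```yaml
nodeSelector:
  node-role.kubernetes.io/worker: ""
  # Assumed label format; some NFD versions emit pci-<class>_<vendor>.present instead
  feature.node.kubernetes.io/pci-10de.present: "true"
```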
Now, with the BuildConfig created, you can look up the GitHub webhook URL:
```bash
[eduardo@fedora-ws image_stream]$ oc describe bc/arch-dummy
Name:             arch-dummy
Namespace:        multiarch-build
Created:          5 days ago
Labels:           app=arch-dummy
                  app.kubernetes.io/component=arch-dummy
                  app.kubernetes.io/instance=arch-dummy
                  app.kubernetes.io/part-of=arch-dummy-app
Annotations:      app.openshift.io/vcs-ref=master
                  app.openshift.io/vcs-uri=https://github.com/ArangoGutierrez/Arch-Dummy
Latest Version:   2

Strategy:         Docker
URL:              https://github.com/ArangoGutierrez/Arch-Dummy
Ref:              master
ContextDir:       /
Dockerfile Path:  build/Dockerfile
Output to:        ImageStreamTag arch-dummy:latest

Build Run Policy: Serial
Triggered by:     Config
Webhook Generic:
  URL:            https://api.4.z.y-ed-dev.blog-openshift.devcluster.openshift.com:6443/apis/build.openshift.io/v1/namespaces/multiarch-build/buildconfigs/arch-dummy/webhooks/<secret>/generic
  AllowEnv:       false
Webhook GitHub:
  URL:            https://api.4.z.y-ed-dev.blog-openshift.devcluster.openshift.com:6443/apis/build.openshift.io/v1/namespaces/multiarch-build/buildconfigs/arch-dummy/webhooks/<secret>/github
Builds History Limit:
  Successful:     5
  Failed:         5

Build         Status    Duration  Creation Time
arch-dummy-1  complete  1m37s     2020-03-31 17:35:32 -0400 EDT

Events:           <none>
```
With this URL, you can then follow the GitHub webhook instructions to configure the repository, giving you a ready-to-work imageStream.
To learn more about OpenShift builds and more advanced use cases, see the OpenShift builds documentation.
Deploy an Example
To test what you just learned today, you can deploy the Arch-Dummy image you just built as a didactic confirmation that the feature-specific build is working. To do so, deploy the image as detailed on https://learn.openshift.com/introduction/deploying-images/ by selecting the built image “arch-dummy:latest”.
This image was built from the repo https://github.com/ArangoGutierrez/Arch-Dummy as seen in the imageStream.yaml.
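If you prefer to deploy from YAML instead of the web console, a minimal sketch of a Deployment pulling the image from the internal registry might look like the following; the internal registry hostname is an assumption for OpenShift 4.x, so adjust it to your cluster:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: arch-dummy
  namespace: multiarch-build
  labels:
    app: arch-dummy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: arch-dummy
  template:
    metadata:
      labels:
        app: arch-dummy
    spec:
      # Schedule the pod on the same kind of hardware the image was built for
      nodeSelector:
        node-role.kubernetes.io/worker: ""
        beta.kubernetes.io/arch: amd64
      containers:
        - name: arch-dummy
          # Assumed internal registry hostname; adjust if your cluster differs
          image: image-registry.openshift-image-registry.svc:5000/multiarch-build/arch-dummy:latest
```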
The application exposes a small API service with three endpoints:
/ -> Will retrieve information about the app
/version -> Will retrieve information about the app binary and where it was built
Example output of /version:
```json
{"Git Commit":"6825a2f2a5b6a60278868260d8cdb51d192d9e63","CPU_arch":"Intel(R) Xeon(R) CPU E5-2686 v4 @","Built":"Tue Mar 31 21:43:15 UTC 2020","Go_version":"go1.12.8 linux/amd64"}
```
/cpu -> Will retrieve information about the node on which the app is currently running
```json
{"name":"Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz","model":"79","family":"6"}
```
This dummy arch app allows you to verify that the image was built correctly and scheduled on the intended hardware.
Conclusion
Building hardware-specific images is easy when you leverage internal OpenShift tooling like imageStreams and couple it with the Node Feature Discovery Operator to manage the detection of hardware features and configuration in the cluster. OpenShift's simplicity allows developers to define the nodeSelector key and orchestrate image builds on the target hardware. This can prove very useful for image-build processes that involve AI/ML training requiring GPUs and other special resources.
Future Work
In this blog post, you saw a quick example of how to tie together the Node Feature Discovery Operator and OpenShift imageStreams for simple hardware-specific image builds. A follow-up post goes deeper into OpenShift, replacing the imageStream build with OpenShift Pipelines and another operator, the Special Resource Operator, to build more complex images and deploy them in the cluster.