Using JBoss Data Grid HotRod Client on OpenShift v3

By Vijay Chintalapati

In this post we will explore how to create a project in OpenShift that hosts both a JDG (JBoss Data Grid) cluster and a HotRod (HR) client application.

Instead of going with the easier setup of a webapp on JBoss EAP containing the HR libraries, we will use a console-based application and discuss how such an app can be deployed on OpenShift using a feature called Source-to-Image.

Source-to-Image for deploying Java applications

However small, a console-based Java application has to run within a Docker container to run in an OpenShift environment, and a container needs an image hosted in an accessible image registry.

Source-to-Image (s2i) is the process of producing just such an image. Each s2i build involves a builder image, particularly when the project is hosted externally in an accessible Git repository.

The builder image would:

  1. Check out the Git repository provided when creating a new app
  2. Build the repo using the specified branch, context directory, and tag
  3. Layer the resulting binary package on top of the base image defined in the builder image, using the process defined in certain well-known s2i runtime files

The final image is then pushed to the OpenShift registry and later used to spin up pods. For the purposes of this post, we will not go into detail on how to put together a builder image; instead we will see how to leverage an existing project to build one.
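
To make the three steps above concrete, here is what the same checkout-build-layer flow looks like with the standalone s2i CLI. This is a sketch for illustration only; the output image name hotrod-client-app is an example, not something the setup below requires.

# Build the application source against a builder image and
# produce a runnable application image (output name is an example)
s2i build https://github.com/vchintal/openshift-hotrod-console-client.git \
    vchintal/s2i-java hotrod-client-app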

Building the Builder Docker Image (Optional)

Follow the steps listed below to build a plain Java runtime builder image.

Step 1 : Checkout the fabric8io-images/s2i project from GitHub

git clone git@github.com:fabric8io-images/s2i.git

Step 2 : Build a new Docker image

# Navigate to the jboss folder of the s2i repo
cd s2i/java/images/jboss

# Run the docker build command to create a new image using the
# Dockerfile in the folder. The tag used is shown only as an
# example
sudo docker build -t vchintal/s2i .

# As a separate step, add a registry-qualified tag to the image
sudo docker tag vchintal/s2i docker.io/vchintal/s2i

# Depending on your docker setup, you can run the following 
# command to push the image to docker hub. A prior login to
# docker hub is assumed here
sudo docker push docker.io/vchintal/s2i

Create a new Project on OpenShift v3

Log into the OpenShift environment

[root@intrepid origin]# oc login                                      
Authentication required for https://192.168.1.17:8443 (openshift)
Username: admin
Password:
Login successful.

Create a new project

oc new-project hotrod-demo \
   --display-name="HotRod Client Demo" \
   --description="Project to demonstrate how to connect to a remote JDG cluster via HotRod protocol"

Assign Correct Permissions to Service Accounts

Service accounts are API objects that exist within each project. For processes in containers inside pods to contact the API server, they need to be associated with a service account.

The default service account needs the view role so that the JDG nodes can query the API server for peer pods and cluster successfully.

oc policy add-role-to-user view \
   system:serviceaccount:$(oc project -q):default -n $(oc project -q)
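
If you want to double-check the grant, OpenShift v3's oc policy who-can command lists who is allowed to perform a given action; the default service account should now show up in its output:

# Verify that the default service account can now list pods
oc policy who-can list pods -n $(oc project -q)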

Create a JDG app and scale to a cluster size of 3

# Use a pre-defined template, as it creates the whole set of resources;
# 'default' is the cache that will hold the entries created by the HotRod client
oc new-app --template=datagrid65-basic -p CACHE_NAMES=default

# Scale to a cluster size of three after ensuring the first pod came
# up correctly
oc scale --replicas=3 dc datagrid-app
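
To watch the scale-up progress, follow the pod list until all three datagrid-app replicas reach Running status:

# Watch the pods as the new replicas come up (Ctrl+C to stop)
oc get pods -w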

Verify clustering between JDG pods

  1. Within the HotRod Client Demo project, go to Browse → Pods
  2. Click on any of the JDG pods (named like datagrid-app-x-xxxxx) that is in Running status
  3. Staying on the Details tab, click on Open Java Console
  4. In the JMX tree, choose jboss.infinispan → CacheManager → clustered → CacheManager and verify the Cluster size; it should be 3 (a command-line alternative is sketched below)
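
The JGroups membership view is also printed in the pod logs, so the cluster size can be checked from the command line as well; <pod-name> below is a placeholder for one of the actual JDG pod names:

# Look for the JGroups cluster view in a JDG pod's log
oc logs <pod-name> | grep -i "view"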

Create and deploy a Java console app

# The git repository shown below points to the project that
# uses the HR client to connect to the JDG cluster
oc new-app vchintal/s2i-java~https://github.com/vchintal/openshift-hotrod-console-client.git

Once deployed, the app immediately connects to the JDG cluster and writes 100 entries to the default cache it is hardcoded to use.
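
For reference, a minimal HotRod client doing just that might look like the sketch below. This is not the repository's exact code, and the service host/port environment variable names are assumptions based on how OpenShift injects service endpoints into pods:

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class HotRodClientApp {
    public static void main(String[] args) {
        // OpenShift injects <SERVICE>_SERVICE_HOST/_PORT variables into pods;
        // these names assume the template's HotRod service is datagrid-app-hotrod
        String host = System.getenv().getOrDefault(
                "DATAGRID_APP_HOTROD_SERVICE_HOST", "datagrid-app-hotrod");
        int port = Integer.parseInt(System.getenv().getOrDefault(
                "DATAGRID_APP_HOTROD_SERVICE_PORT", "11222"));

        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.addServer().host(host).port(port);

        RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build());
        RemoteCache<String, String> cache = cacheManager.getCache("default");

        // Write 100 entries to the default cache
        for (int i = 0; i < 100; i++) {
            cache.put("key-" + i, "value-" + i);
        }

        System.out.println("Done writing 100 entries");
        cacheManager.stop();
    }
}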

Take a moment to look at the source code repository used. You will notice two additional special folders containing files that the s2i process needs (an illustrative example follows the list):

  1. configuration
  2. .s2i
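
For instance, with the fabric8 Java builder image the .s2i/environment file is typically what tells the runtime which main class to launch. The values below are illustrative only; the exact variable names depend on the builder image in use:

# .s2i/environment (illustrative values; variable names depend on the builder image)
JAVA_MAIN_CLASS=com.example.HotRodClientApp
MAVEN_ARGS=package -DskipTests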

Test the cache puts by the HR client app

Just as when verifying the clustering of the JDG nodes, follow steps 1 to 3, but for the last step navigate the JMX tree to: jboss.infinispan → Cache → default(dist_sync) → clustered → Statistics and verify the field Number of Entries. Since it is a distributed cache with numOwners=2 and a cluster size of 3, each node will carry approximately (2 × 100) / 3 ≈ 67 entries.
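
The same number can also be read programmatically, since the HotRod client exposes per-server statistics. The sketch below assumes the same connection setup as the client above and prints the entry count of whichever node the client connection landed on:

import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.ServerStatistics;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class CacheStats {
    public static void main(String[] args) {
        // Connection setup as in the client sketch above (host/port assumed)
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.addServer().host("datagrid-app-hotrod").port(11222);
        RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build());

        RemoteCache<String, String> cache = cacheManager.getCache("default");
        // Statistics are per-server: this reports the entries held by the
        // node this client is talking to (roughly 67 in our example)
        ServerStatistics stats = cache.stats();
        System.out.println("Entries on this node: "
                + stats.getStatistic(ServerStatistics.CURRENT_NR_OF_ENTRIES));
        cacheManager.stop();
    }
}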
