
Overview

Yellowfin can be clustered on multiple servers to allow for high-availability and load-balancing. The function of load-balancing (distributing incoming external requests across the nodes) can be achieved with a hardware load-balancer or with load-balancing software. This guide outlines the modifications required to Yellowfin's configuration to enable clustering, but not the configuration of the external environment that directs incoming requests to particular nodes.

It is required that the load-balancing infrastructure delivers network traffic to the Yellowfin application server transparently, as if the client were connecting to the application server directly. If Session Replication is not enabled, the load-balancing infrastructure will need to provide “sticky-session” functionality, where traffic for a user’s session is sent to the same node for the duration of that session.

A clustered environment has two components: Application Messaging, which is required for the cluster to operate correctly, and Session Replication, which is optional.

 

Application Level Messaging

A Yellowfin node will communicate with other clustered nodes when the application needs to make changes that affect the entire cluster. This is usually to maintain the integrity of caches on remote nodes, but also facilitates functions such as user synchronisation and licence management. Application Level Messaging is configured by the ClusterManagement servlet in the web.xml file.

 

Container Level Session Replication

The Yellowfin application has also been written to allow user session information to be replicated onto multiple cluster nodes concurrently. User session information is the memory footprint that is stored on the server for each user session. Enabling Session Replication allows for the user’s session to continue, even if the cluster node they were connected to experiences a failure.

Without Session Replication, failure of a cluster node will destroy a user’s session, and they will need to log in again to another node.

Session Replication is a function of the Java Application Server, or other external infrastructure. An example of how this can be achieved with Tomcat will be discussed in this document.
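
For instance, Tomcat provides session replication through its SimpleTcpCluster component. The server.xml fragment below is a minimal sketch only, using Tomcat's DeltaManager with default membership settings; a production deployment will need the full membership, sender and receiver configuration described in the Tomcat clustering documentation.

<!-- Minimal sketch: Tomcat session replication (placed inside <Engine> or <Host>) -->
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" channelSendOptions="8">
      <Manager className="org.apache.catalina.ha.session.DeltaManager"
               expireSessionsOnShutdown="false"
               notifyListenersOnReplication="true"/>
</Cluster>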

 

 

Yellowfin Database Clustering

A Yellowfin Cluster must share a single Yellowfin Database instance. This can be a single database server shared by all Yellowfin nodes, or a database that is itself clustered. It is important that the Yellowfin database is scaled to handle database requests from all the nodes in the Cluster.

Database Clustering / Replication should be transparent to Yellowfin, as Yellowfin will have a single database URL for the entire cluster. Each node should be connecting to the same logical database, irrespective of how it is configured or clustered.
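
To illustrate, every node is configured with the same logical connection string. The parameter name and URL below are illustrative assumptions only; the actual connection parameters are the ones written into each node's configuration by the installer.

<!-- Illustrative only: every node points at the same logical database URL -->
<init-param>
      <param-name>JDBCUrl</param-name>
      <param-value>jdbc:postgresql://yellowfin-db.example.com:5432/yellowfin</param-value>
</init-param>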

Licensing

Because the Yellowfin Licence file is stored in the shared Yellowfin database, the Licence must contain a reference to the hostname of every node in the cluster. A clustered licence can be requested from Yellowfin.

 

 

Installing a Yellowfin Cluster

Because a Yellowfin Cluster requires only one database instance, the installation process differs slightly from a standalone installation.

The first Yellowfin Cluster node can be installed in the same way as a standalone instance of Yellowfin, using either the command-line or GUI installer.

Additional Yellowfin Cluster nodes do not require their own Yellowfin database. There are several ways to install additional nodes:

No Database Access Installation

After the initial node is installed, and the Yellowfin database is created, the Yellowfin installer can be used for the installation of subsequent nodes without affecting the initial database.

Run the Yellowfin installer with the following command:

java -jar yellowfin-20170928-full.jar action.nodbaccess=true

 

The action.nodbaccess=true option runs the installer as usual, prompting for database credentials, but does not create or alter the database during installation; it generates only the filesystem component of Yellowfin. If Legacy Clustering is used, individual changes may need to be made to the web.xml file on each node.

File System Replication

As the filesystem structure on each node is the same, a new node can be created by duplicating the Yellowfin installation directory onto another computer. If Legacy Clustering is used, changes will need to be made to the web.xml file on each node.
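
As a sketch, assuming a default installation path of /opt/yellowfin on both machines (both the path and the host name newnode are placeholders), the installation directory could be copied to a new node like this:

rsync -a /opt/yellowfin/ newnode:/opt/yellowfin/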

Virtual Machine Replication

A Virtual Machine image (such as an AWS AMI, an Azure Compute image, or a Docker image) could be pre-configured with the Application Server component of Yellowfin, and a copy of that image started for each application node in the cluster.

 

 

Configuring Yellowfin for Application Messaging

Each Yellowfin node must have the ClusterManagement servlet enabled for the node to become “Cluster Aware”. The ClusterManagement servlet is enabled by adding an additional configuration block to the web.xml file on each node.

Application Messaging is performed differently depending on the implementation mode. There are currently three modes available: REPOSITORY, DYNAMIC, and LEGACY.

 

Multicast Cluster Messaging (DYNAMIC mode)

In DYNAMIC mode, Yellowfin application messaging is handled by a multicast messaging library called JGroups. This method automatically discovers other nodes in the cluster that share the same Yellowfin database.

The default configuration of JGroups uses UDP multicast messages to determine group membership and find new nodes. There may be environments where these types of messages cannot be sent; for example, Amazon does not allow multicast packets on its internal network between nodes. The Multicast Cluster Messaging adapter allows you to pass an XML configuration file that configures JGroups to use other methods for node discovery. This file can be referenced by passing its path in the BroadcastConfiguration servlet parameter within the ClusterManagement servlet.
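
For example, JGroups can be switched to TCP-based discovery with a configuration file along the following lines. This is a minimal sketch only: the protocols shown are standard JGroups protocols, but the bind port, the host list, and the file name jgroups-tcp.xml are assumptions that must be adapted to your environment.

<!-- jgroups-tcp.xml (sketch): TCP transport with a static member list -->
<config xmlns="urn:org:jgroups">
      <TCP bind_port="7800"/>
      <TCPPING initial_hosts="node1[7800],node2[7800]" port_range="1"/>
      <MERGE3/>
      <FD_SOCK/>
      <FD_ALL/>
      <VERIFY_SUSPECT/>
      <pbcast.NAKACK2 use_mcast_xmit="false"/>
      <UNICAST3/>
      <pbcast.STABLE/>
      <pbcast.GMS/>
      <MFC/>
      <FRAG2/>
</config>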

 

The following servlet definition needs to be added to the web.xml on each node:

 

<!-- Cluster Management -->
<servlet>
      <servlet-name>ClusterManagement</servlet-name>
      <servlet-class>com.hof.mi.servlet.ClusterManagement</servlet-class>
      <init-param>
            <param-name>ClusterType</param-name>
            <param-value>DYNAMIC</param-value>
      </init-param>
      <init-param>
            <param-name>SerialiseWebserviceSessions</param-name>
            <param-value>true</param-value>
      </init-param>
      <init-param>
            <param-name>CheckSumRows</param-name>
            <param-value>true</param-value>
      </init-param>
      <init-param>
            <param-name>EncryptSessionId</param-name>
            <param-value>true</param-value>
      </init-param>
      <init-param>
            <param-name>EncryptSessionData</param-name>
            <param-value>true</param-value>
      </init-param>
      <init-param>
            <param-name>AutoTaskDelegation</param-name>
            <param-value>true</param-value>
      </init-param>
      <load-on-startup>11</load-on-startup>
</servlet>
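
If a custom JGroups configuration file is used (as sketched above), it can be referenced by adding one more init-param to this servlet definition. The file path shown below is a hypothetical example:

<init-param>
      <param-name>BroadcastConfiguration</param-name>
      <param-value>jgroups-tcp.xml</param-value>
</init-param>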

 

 

 

Multicast with Repository Discovery (REPOSITORY mode)

Repository Discovery is an implementation of DYNAMIC mode that uses a custom plugin to discover nodes via the shared Yellowfin repository. This can be useful for enabling clustering in environments where multicast packets do not work.

This functionality can also be enabled in DYNAMIC mode by setting the RepositoryDiscovery servlet parameter to true.
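
For example, the servlet definition shown for DYNAMIC mode can be reused with its ClusterType value changed to REPOSITORY; alternatively, keep ClusterType as DYNAMIC and add the following init-param (a sketch based on the parameter named above):

<init-param>
      <param-name>RepositoryDiscovery</param-name>
      <param-value>true</param-value>
</init-param>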

 

 

 

 

Web Service Cluster Messaging (LEGACY mode)

Yellowfin’s legacy cluster messaging is handled by AXIS web services. This requires that all nodes be defined at start-up, and that the service end-point, port, user and password be defined in each node’s web.xml file. Legacy mode does not allow cluster instances to reside on the same host.
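
As a sketch only, a LEGACY servlet definition follows the same shape as the DYNAMIC example above, with ClusterType set to LEGACY and the node list and web service credentials supplied as additional parameters. The parameter names and values below (ClusterHosts, ServiceUser, ServicePassword, ServicePort) are illustrative assumptions; consult the Yellowfin documentation for your version for the exact names.

<!-- Cluster Management (LEGACY) - parameter names below are illustrative assumptions -->
<servlet>
      <servlet-name>ClusterManagement</servlet-name>
      <servlet-class>com.hof.mi.servlet.ClusterManagement</servlet-class>
      <init-param>
            <param-name>ClusterType</param-name>
            <param-value>LEGACY</param-value>
      </init-param>
      <init-param>
            <param-name>ClusterHosts</param-name>
            <param-value>192.168.1.10|192.168.1.11</param-value>
      </init-param>
      <init-param>
            <param-name>ServiceUser</param-name>
            <param-value>admin@yellowfin.com.au</param-value>
      </init-param>
      <init-param>
            <param-name>ServicePassword</param-name>
            <param-value>yourpassword</param-value>
      </init-param>
      <init-param>
            <param-name>ServicePort</param-name>
            <param-value>8080</param-value>
      </init-param>
      <load-on-startup>11</load-on-startup>
</servlet>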
