Administering the Platform

Platform Administration
Contents
Administering the Platform
  Jive Platform Overview
    System Components
    List of Required Ports and Domains
    Platform Conventions, Management and Tools
    What Happened to jiveHome?
  Jive Security
    Security and Internal Development Processes
    In-product Security Features
    Jive and Cookies
    Other Cookies
    Security of Public Cloud Communities
    Security of Private Cloud Communities
    Security of Cloud-Delivered Services
    Security Recommendations
    Security FAQ
  Platform Run Book (Linux)
    Jive HTTPD
    Jive HTTPD Networking
    Jive HTTPD Log Files
    Jive-Managed Applications
    Jive Database Server
  Operations Cookbook
    Enabling SSL Encryption
    Configuring a Persistent Session Manager
    Forcing Traffic to HTTPS
    Restricting Admin Access by IP
    Disabling the Local Jive System Database
    Changing the Configuration of an Existing Instance
    Performing a Jive System Database Backup
    Performing Database Maintenance
    Backup and Storage Considerations
    Using an External Load Balancer
    Enable Application Debugger Support
    Setting Up Document Conversion
    Configuration Support for External Client Access
    Adding Fonts to Support Office Document Preview
  Monitoring Your Jive Environment
    Basic Monitoring Recommendations
    Advanced Monitoring Recommendations
  Fine-Tuning Performance
    Client-Side Resource Caching
    Server-Side Page Caching
    Configuring External Static Resource Caching
    Adjusting the Java Virtual Machine (JVM) Settings
    Performance Tuning Tips
    Search Index Rebuilding
    Using a Content Distribution Tool with Jive
  Clustering in Jive
    Clustering Best Practices
    Clustering FAQ
    Managing an Application Cluster
  In-Memory Caching
    Parts of the In-Memory Caching System
    How In-Memory Caching Works
    Cache Server Deployment Design
    Choosing the Number of Cache Server Machines
    Adjusting Cache-Related Memory
    Managing In-Memory Cache Servers
    Configuring In-Memory Caches
    Clustering and In-Memory Caching: Rationale and Design for Changing Models
    Troubleshooting Caching and Clustering
  Application Management Command Reference
    appadd Command
    appls Command
    apprestart Command
    apprm Command
    appsnap Command
    appstart Command
    appstop Command
    appsupport Command
    dbbackup Command
    manage Command
    upgrade Command
Administering the Platform
Learn how to administer and manage the platform. This section includes run books, configuration information, and
performance tuning help.
Jive Platform Overview
The platform introduced in version 3 includes important changes that affect the way you install and administer the
application. This topic provides an overview of those changes, the platform, and the tools that come with it.
Through a design that supports a more focused set of components (in particular, the application server and database servers), the platform provides a standard, reliable, and better-integrated environment that makes it easier to optimize the applications deployed on it. The platform -- especially its deployment as a package -- also makes installation much easier by removing the need to follow instructions (including component-specific issues and limitations) specific to particular combinations of application server, DBMS, and so on.
Note: If you're upgrading, Jive Software provides tools for migrating your existing deployment. Be sure to
see the Installation Guide for more on Migrating Data from an Unsupported DBMS and the contents of your
jiveHome directory.
Here's a high-level list of the things that are new or changed with the platform's introduction:
• System Components were chosen to create an integrated, standard platform that can be more fully optimized.
• Database server support was tuned to the PostgreSQL and Oracle DBMSes.
• Application server support is optimized to the Apache Tomcat instance included with the platform. Other application servers aren't supported.
• Supported operating systems include Red Hat and SuSE Enterprise Linux.
• Tools were included to make managing the application easier and more reliable.
• The jiveHome directory became the jive.instance.home environment variable.
Related Content
You might be interested in other documentation related to the platform. The following topics include references and
guides for making the most of the platform and applications installed on it.
Platform Run Book (Linux) -- Basic system commands for managing the platform. If you need to handle something quickly, start here.
Operations Cookbook -- Sample configurations and script examples for long-term operations.
Application Management Commands -- A reference for commands you can use to maintain your managed instance.
System Requirements -- System components and technologies supported in this release.
Installing -- Step by step instructions for installing the platform.
Upgrading -- Step by step instructions for upgrading the platform.
System Components
The Jive platform consists of several high-level components that work together to provide overall system functionality. The following sections provide an overview of these components and their role in the architecture. By default, the platform installs the following components and configures them to start and stop in the correct order during system startup and shutdown.
Apache HTTPD Server
The platform contains a customized version of the Apache Foundation’s Apache HTTPD web server software.
Among other things, this software is used to process HTTP and HTTPS-encrypted requests between the platform
and end users. The Apache HTTPD server uses the JK protocol to communicate with the back-end application server
described in the next section.
Tomcat Application Server
The Apache Tomcat application server is used to host back-end application and business logic as well as to generate
dynamic HTTP traffic served by the Apache HTTPD Server. The application server is also responsible for processing
email delivered by the system, and for scheduling background tasks such as search index management and integration
with third-party applications and systems.
PostgreSQL Database Server
The PostgreSQL database server is an RDBMS (Relational Database Management System) service that is hosted internally within the platform. You can use this service or a PostgreSQL or Oracle system of your own.
Important: The pre-packaged PostgreSQL DBMS is for evaluation purposes and should not be used for
production instances.
Jive System Configuration
As part of the platform installation, the Jive package will tune various settings such as shared memory and maximum
file handles. The details of these changes can be found in the “jive-system” script, located in the standard “/etc/init.d”
directory.
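To see exactly what was changed on your system, read the script itself; the commands below are a hedged example of spot-checking values commonly tuned this way (the specific parameters Jive adjusts may differ on your system):
# Review the tuning applied by the Jive system script
less /etc/init.d/jive-system
# Spot-check shared memory and file-handle limits (illustrative, not exhaustive)
sysctl kernel.shmmax kernel.shmall
ulimit -n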
List of Required Ports and Domains
The following tables show the ports you need to open in order to enable key Jive components. For services that are
hosted by Jive, you also need to ensure access to certain domains or IP addresses. These addresses are shown where
required.
Jive Core Components
Jive - Tomcat
  Port(s):
  • HTTP monitor port: 9000 (does not need to be opened outside)
  • Server port: 9100 (does not need to be opened outside)
  • HTTP port: 9200 (does not need to be opened outside)
  • JMS port: 61616 (required for DocVerse, but needed on only one node)
  Direction: Open
Jive - Apache
  Port(s): HTTP port: 80
  Direction: Open and public-facing
Jive Genius Recommender Service (Activity Engine)
  Port(s): TCP port: 7020; JMX port: 7021; 443
  Direction: Open
  Domain(s) or IP(s): in.genius.jivesoftware.com, out.genius.jivesoftware.com
Analytics
  Port(s): none
  Direction: Inbound
Jive Apps Market
  Port(s): none
  Direction: n/a
  Domain(s) or IP(s): market.apps.jivesoftware.com, gateway.jivesoftware.com, apphosting.jivesoftware.com, developers.jivesoftware.com
Licensing
  Port(s): 443
  Direction: Outbound
  Domain(s) or IP(s): https://license.jivesbs.com
Cluster Node Communication
• Do not put a firewall between your cache servers and your Jive application servers. If you do so, caching will not work. A firewall is unnecessary because your application servers will not be sending untrusted communications to the cache servers, or vice versa. There should be nothing that might slow down communication between the application servers and the cache servers.
• All ports between the cache and web application servers must be open.
• Port 6650 should be blocked to external access (but not between the cluster nodes!) so that any access outside of the datacenter is disallowed. This is to prevent operations allowed by JMX from being executed on the cache server. (A firewall sketch follows this list.)
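A minimal firewall sketch for the port 6650 rule above, assuming iptables and an example cluster subnet of 10.0.0.0/24 (both are assumptions; use your own firewall tooling, interfaces, and addressing):
# Allow 6650 from other cluster nodes, then drop it from everywhere else
iptables -A INPUT -p tcp -s 10.0.0.0/24 --dport 6650 -j ACCEPT
iptables -A INPUT -p tcp --dport 6650 -j DROP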
Databases and Database Servers
Component                      Port(s)   Direction
Database - PostgreSQL          5432      Open/bidirectional
Database server - SQL Server   1433      Open/bidirectional
Database server - MySQL        3306      Open/bidirectional
Database server - Oracle       1521      Open/bidirectional
Jive Plugins (all optional)
Document Conversion
  Port(s):
  • Jive Document Conversion Daemon (JDCD): 8200
  • Jive OpenOffice Daemon: 8850 (older versions) or 8300 (newer versions)
  • OpenOffice: 8820, 8821, 8822, 8823, 8824
  • JDCD callback: 80
  Direction: Open
Jive Connects for Office
  Port(s): 80 or 443 (we recommend using 443 to transmit content)
  Direction: Open
Jive for SharePoint
  Port(s): 443
  Direction: Inbound to public SSL-enabled VIP for Jive instance
Mobile (all locations, including EMEA)
  Port(s): 443 if you use SSL, 80 if not
  Direction: Inbound to Jive instance
  Domain(s) or IP(s): out.jive-mobile.com (204.93.64.112)
Mobile (all locations, including EMEA)
  Port(s): 443
  Direction: Outbound from Jive instance
  Domain(s) or IP(s): gateway.jive-mobile.com (204.93.64.255 and 204.93.64.252)
Mobile (EMEA only)
  Port(s): 443 if you use SSL, 80 if not
  Direction: Inbound to Jive instance
  Domain(s) or IP(s): 204.93.80.122
Jive Present
  Port(s): 443
  Direction: Inbound to Jive instance
  Domain(s) or IP(s): 188.93.102.111, 188.93.102.112, and 188.93.102.115
Openfire
  Port(s): 9090 and/or 9091
  Direction: Open
Video
  Port(s):
  • 1935 (RTMP); if this is not available, then use port 80 (RTMPT) -- Open/outbound
  • 80 and 443 to a specific IP range (for example, 208.122.31.1 - 208.122.31.250) -- Open/inbound
  Domain(s) or IP(s): 208.122.47.241, 208.122.47.242, 208.122.47.243, 74.63.51.50, 74.63.51.55
Platform Conventions, Management and Tools
The "jive" User
By default, Jive will attempt to install a local system user named “jive” belonging to group “jive” with a home
directory of “/usr/local/jive”. If a user named jive already exists on the system where the installation is being
performed, the platform will use the existing user. Most binaries, scripts and configurations used by the platform are
owned by the jive user with a small number of files owned by root.
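For example, you can confirm the user, group, and installation-directory ownership with standard commands (output will vary by system):
# Verify the jive user and group exist
id jive
# Check ownership of the platform's home directory
ls -ld /usr/local/jive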
Application Abstractions
The Jive platform is designed to manage one or more logical “application” servers, each serving a variety of
functional concerns.
Master Application Template
The master application template is the core set of Jive software used to create all subsequent instances. By
convention, the files in the master template are stored in the jive user’s “applications” directory located at:
/usr/local/jive/applications/template
Application Instances
Upon installation, and at the request of system administrators, the platform creates instances of the master application template, each with a unique name. Each instance is managed, configured, and run independently of other applications. By design, an application instance runs in a dedicated operating system process, providing separation between applications: one instance cannot directly exhaust the resources of another in a way that cannot be stopped with standard operating system commands (such as kill).
Managed Application Instances
An application instance is considered managed if it is created, updated, and has its lifecycle controlled by the platform tooling (appadd, appstop, appstart, and so on). Managed applications are automatically upgraded when the platform is upgraded.
In some circumstances, you might want to create applications that are not fully managed. For example, applications created with the "--no-link" option to the appadd tool are not directly upgraded by platform upgrades or the Jive tooling; they must be upgraded manually by a system administrator.
System Management Tools
Ultimately, most Jive tools delegate to standard Linux systems management tools such as bash scripts, the “ps”
command, and so on. It is possible to monitor Jive-managed components using standard tools such as ps and top as
described in the Platform Run Book (Linux).
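For example, the following standard Linux commands (not Jive-specific tools) give a quick view of Jive-managed processes:
# List processes owned by the jive user
ps -u jive -f
# Watch their CPU and memory usage interactively
top -u jive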
Database Management Tools
When using the local database (installed as part of Jive) as the system of record for a deployment, the platform attempts to tune and back up the underlying database. These defaults are sufficient for smaller databases, but larger databases have storage and backup considerations unique to their individual deployments, so you will likely want to alter the platform defaults.
Auto Backups
Upon installation, the Jive platform database is scheduled for daily full backups via an entry in the cron.daily section
of the system cron task scheduler. Specifically, the installation process creates a symlink that maps /usr/local/jive/bin/
dbmaint to /etc/cron.daily/jive-dbmaint. This script performs a full analysis and backup of the contents of the local
database. Backups are stored to /usr/local/jive/var/data/backup/postgres/full and are stamped with the date that the
backup began. Logs for the backup and maintenance are stored in the standard platform location at /usr/local/jive/var/
logs, in files dbmaint.log, dbback.log and dbanalyze.log.
In addition to full backups, the platform captures PostgreSQL WAL (Write-Ahead Log) segments to /usr/local/jive/
var/data/backup/postgres/wal. For more information about WAL segments, see the PostgreSQL documentation.
Important: The pre-packaged PostgreSQL DBMS is for evaluation purposes and should not be used for
production instances.
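Using the paths described above, you can verify that the job is scheduled and inspect recent output; for example:
# Confirm the daily maintenance job is wired into cron
ls -l /etc/cron.daily/jive-dbmaint
# List recent full backups and WAL segments
ls -lt /usr/local/jive/var/data/backup/postgres/full | head
ls -lt /usr/local/jive/var/data/backup/postgres/wal | head
# Review the backup and maintenance logs
tail -n 50 /usr/local/jive/var/logs/dbmaint.log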
Database Storage Considerations
The platform backup infrastructure purges full backups that are 30 days old or older. It does not purge the WAL segment backups described in the previous section. As a general rule, you can safely remove WAL segments that are older than the most recent full backup; keep the segments written since that backup so they are available for a partial recovery of the database in the event of corruption.
System administrators should evaluate the storage capacity of their systems and how quickly backups consume it to arrive at the best strategy for purging backup files.
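A minimal purge sketch, assuming the default backup location shown above and a 30-day window that matches the full-backup retention; confirm the window against your own recovery requirements before deleting anything:
# Preview WAL segments older than 30 days
find /usr/local/jive/var/data/backup/postgres/wal -type f -mtime +30 -print
# Once the list looks right, run the same command with -delete instead of -print:
# find /usr/local/jive/var/data/backup/postgres/wal -type f -mtime +30 -delete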
Application Management
The platform manages one or more instances of the Jive software. Each instance represents a dedicated Apache
Tomcat application server that ultimately runs as a dedicated server process, independent of other application
servers on the host machine. By convention, each application instance is represented by a directory entry beneath the
“applications” directory located in the jive user’s home directory (/usr/local/jive/). For example, an application named
“biodome” will have a corresponding application directory located at:
/usr/local/jive/applications/biodome
Each application has a standard set of subdirectories beneath its top-level application directory (a sample listing follows this list):
• <app_name>/bin – Scripts and environment configuration used by this instance. Notably:
  • manage – Used to start, stop, and restart the application server instance in a graceful manner, intended to minimize the possibility of data loss.
  • setenv – Script for failsafe environment configuration invoked by the manage script. This script should not be edited except under the direction of Jive support.
  • instance – Per-instance configuration information. The contents of this script customize the environment for the named application instance. For example, to change the HTTP address the application server listens on, the platform writes shell script environment variables to this file.
• <app_name>/conf – Application server runtime configuration files as well as container logging configuration (log4j.properties).
• <app_name>/home – Application instance home directory. This directory represents the instance-specific storage for items such as search indexes, local file system caches, plugin caches, and database configuration.
• <app_name>/application – Commonly a symbolic link to the shared Jive application binaries. If an application instance is created by the appadd tool with the "--no-link" option, the application directory will be a full copy of the shared application binaries at the time the application was created and will not be upgraded during a package upgrade.
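For illustration, the layout for a hypothetical instance named "biodome" (the name is only an example) would look something like this; the output shown is illustrative:
ls /usr/local/jive/applications/biodome
# application  bin  conf  home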
Troubleshooting and Snapshot Utilities
The Jive platform installation captures meaningful system data to various log files as well as providing application
and system snapshot capabilities. With these, Jive support can more easily diagnose issues with the platform.
Support Information Gathering
The primary mechanism for gathering support-related information on the platform is via the appsupport command,
typically executed as the jive user.
When run, the appsupport command will combine multiple useful data points from the system and application logs,
then aggregate the data to the console’s standard output stream. Alternatively, you can use the “-o” flag, along with
the path to a file where the aggregate system information should be appended (if the file does not exist, it will be
created).
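For example (the output path is only an illustration):
# Print the aggregated support data to the console (run as the jive user)
appsupport
# Or append it to a file you can send to Jive support
appsupport -o /tmp/jive-support-info.txt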
System Thread Dumps
In some cases, Jive support may suggest that you capture application thread dumps from a running system. The
platform includes the appsnap tool for this purpose. As with other commands, appsnap is commonly performed as the
jive user.
The most common options used with the appsnap tool are the "--count" and "--interval" options. The count option determines the number of samples taken; the interval option determines the number of seconds between successive samples. The tool writes snapshot output to the console's standard output, or appends it to the file given with the "-o" option.
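For example, to capture five thread dumps ten seconds apart and append them to a file (the path is only an illustration):
# Run as the jive user
appsnap --count 5 --interval 10 -o /tmp/jive-thread-dumps.txt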
What Happened to jiveHome?
If you're upgrading from a version prior to version 3, you might wonder about the jiveHome directory. The jiveHome
directory was where you put (and found) the working files for your community. Caches, themes, plugins, logs -- that
sort of thing.
As of version 3, the jiveHome directory is replaced by the jive.instance.home environment variable. Each instance
depends on three environment variables to identify its place in the bigger picture of its environment:
• jive.home -- The root of all Jive software on the filesystem. Shared resources like Apache and Tomcat binaries are found here, as are individual instances.
• jive.name -- Unique, human-readable identifier of an instance within the local deployment (and importantly, not across all deployments); this value influences where the instance lives in a managed deployment (i.e., platform) and various other subtle artifacts like log file names.
• jive.instance.home -- Previously jiveHome. This is where the application puts working files, including caches, plugins, themes, etc. For example, by default this would be /usr/local/jive/applications/sbs/home/.
When you migrate your application to the platform, the contents of the jiveHome directory are distributed to places that are better suited to the platform. The following table lists the new locations of resources previously found in jiveHome.
Important: The pre-packaged PostgreSQL DBMS is for evaluation purposes and should not be used for
production instances.
• Application logs (including those for database backups) -- previously <jiveHome>/logs, now /usr/local/jive/var/logs
• Installed plugins -- previously <jiveHome>/plugins, now /usr/local/jive/applications/<app_name>/home/plugins
• User-created themes -- previously <jiveHome>/themes, now /usr/local/jive/applications/<app_name>/home/themes
• Cached attachments -- previously <jiveHome>/attachments, now /usr/local/jive/applications/<app_name>/home/attachments
• Cached images -- previously <jiveHome>/images, now /usr/local/jive/applications/<app_name>/home/images
• Cached data -- previously <jiveHome>/cache, now /usr/local/jive/applications/<app_name>/home/caches
• Resources directory -- previously <jiveHome>/resources, now /usr/local/jive/applications/<app_name>/home/resources
• Local system database -- previously <jiveHome>/database, now /usr/local/jive/var/work/sbs/data/postgres-8.3
• jive_startup.xml file -- previously <jiveHome>/jive_startup.xml, now /usr/local/jive/applications/<app_name>/home/jive_startup.xml
Setting jive.instance.home
Jive uses the following convention to determine where jive.instance.home actually exists on the file system. The rules run from more specific to less specific: for example, if a user has gone to the trouble of specifying a JNDI property for the home directory, it is honored over a more generic system property, which in turn is honored over the default location (a sketch follows this list).
1. JNDI environment property named "jive.instance.home" in the context "java:comp/env/"
2. System property named "jive.instance.home"
3. JNDI legacy environment property named "jiveHome" in the same context as #1
4. System property named "jiveHome"
5. The OS default: /usr/local/jive/applications/${name}/home or c:\jive\applications\${name}\home, where ${name} represents the -Djive.name property, with a default of "sbs"
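As an illustration only: if you use the system-property route (rule 2), the properties might be passed to the JVM as shown below. The JAVA_OPTS variable and the exact file where you would set it (typically the instance's bin/instance script) are assumptions about your setup, and the paths are examples, not requirements.
# Hypothetical JVM options; adjust the instance name and path to your deployment
JAVA_OPTS="$JAVA_OPTS -Djive.name=sbs -Djive.instance.home=/usr/local/jive/applications/sbs/home"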
Jive Security
Deployed inside or outside firewalls, Jive's security and privacy features are designed to meet the requirements of the
most tightly regulated global industries and government agencies. Jive Software continually evaluates the security of
current and past product versions to protect your organization's data.
Jive Software makes every effort to protect the confidentiality, integrity, and availability of its public and private
cloud communities, as well as its cloud-delivered services.
Jive Software's public cloud platform, private cloud platform, and cloud-delivered services use the same core data
center architecture and security model.
Note: This security section describes only the Jive application. It does not describe the security features of
third-party products.
Security and Internal Development Processes
Throughout the product development lifecycle, application security is continuously tested and improved.
Jive Software audits all new feature designs for high-level security considerations. Implementations of these features
are validated for potential security issues throughout the development phase. Existing features are audited for security vulnerability regressions. Application-wide audits are performed to ensure that feature integration is secure. Third-party components used by Jive are researched and tracked over time for vulnerabilities and license compliance.
Development includes the following security checks:
• Source code reviews - if you'd like to see screenshots from our source code review tool, contact your Jive Software representative.
• Automated penetration testing - each release of the application is tested with IBM's state-of-the-art security product, AppScan. In addition, we offer AppScan test results from your instance. Contact your Jive Software representative about this service.
• Vulnerability management - Jive Software relies on its own documented release procedure to manage vulnerabilities, which includes a timeline for fixing issues, communicating them to customers, and providing patches.
• Third-party audits - Jive Software performs annual audits across the core product for each hosted service, including black box and white box analyses. If you would like to see these reports, please contact your Jive Software representative.
In-product Security Features
Several built-in security features allow you to configure your Jive community for the appropriate level of security for
your organization.
Authentication Features
Login Security
Using the Admin Console, you can configure Jive
to strongly discourage automated (computer-driven)
registration and logins. Automated registration is usually
an attempt to gain access to the application for malicious
reasons. By taking steps to make registering and logging
in something that only a human being can do, you help
to prevent automated attacks. We recommend using the
following tools, all of which are available as options in
the Admin Console:
• Login Throttling: Enabling login throttling slows down the login process when a user has entered incorrect credentials more than the specified number of times. For example, if you set the number of failed attempts to 5 and a forced delay to 10 seconds, and a user fails to log in after more than 5 attempts, the application would force the user to wait 10 seconds before being able to try again.
• Login Captcha: Enabling login Captcha will display a Captcha image on the login page. The image displays text (distorted to prevent spam registration) that the user must enter to continue with registration. This discourages registration by other computers to send spam messages. The login Captcha setting is designed to display the Captcha image when throttling begins. In other words, after the number of failed attempts specified for throttling, the Captcha image is displayed and throttling begins. You cannot enable the login Captcha unless login throttling is enabled. The Captcha size is the number of characters that appear in the Captcha image, and which the user must type when logging in. A good value for this is 6, which is long enough to make the image useful, but short enough to make it easy for real humans.
• Password Strength: You can choose to enforce strong passwords via the Admin Console. The following options are available out of the box:
  • a minimum of 6 characters of any type
  • a minimum of 7 characters including 2 different character types (uppercase, lowercase, number, punctuation, and/or special characters)
  • a minimum of 7 characters including 3 different character types
  • a minimum of 8 characters, including all 4 character types
To learn more about configuring login and password
security, see Configuring Login Security and Configuring
User Registration.
Email Validation
You can configure Jive to require email validation for
all new accounts. This setting helps to prevent bots from
registering with the site and then automatically posting
content. When you configure email validation, Jive will
require a new user to complete the registration form and
retrieve an email with a click-through link to validate
their registration. To learn how to enable this setting, see
Configuring User Registration.
Account Lockout
Jive does not offer account lockout as an out-of-the-box
feature. However, you can configure Jive to authenticate
against a thirty-party IDP that will perform account
lockout. If this is not something you want to implement,
you can request the account lockout feature from Jive's
Professional Services team as a customization.
SSO
As of Jive 5.0, the application includes out-of-the-box
support for SAML. SSO can also be implemented as
a customization by Jive's Professional Services team,
a Jive partner, or an engineer of your choice. Be sure to
read Getting Ready to Implement SAML SSO.
Delegated Authentication
When delegated authentication is enabled and
configured, Jive makes a simple Web Service call out
to the configured server whenever a user attempts to log
in. This allows administrators to control the definition
of users outside of the community. To learn more about
this, see Configuring Delegated Authentication.
Authorization Features
Jive includes powerful built-in end user and admin permissions matrices, as well as customizable permissions.
Depending on the assigned role, users can see or not see specific places and the content posted there. In addition,
administrative permissions can be used to limit the access level of administrators. Jive administrators control user
and admin permissions through the Admin Console. To learn more about how permissions work, see Managing
Permissions.
Moderation and Abuse Features
Moderation
Jive administrators can enable moderation so that
designated reviewers view and approve content before
it is published in the community. This can be useful for
places that contain sensitive information. In addition
to content moderation, administrators can enable
moderation for images, profile images, avatars, and
user registrations. For more about moderation, see
Moderating Content.
Abuse Reporting
Administrators can enable abuse reporting so that users
can report abusive content items. To learn more about
abuse reporting, see Setting Up Abuse Reporting.
Banning Users
Administrators can block a person's access to Jive so that
they are no longer able to log in to the community. For
example, if someone becomes abusive in their messages
(or moderating their content is too time-consuming),
administrators may choose to ensure that the user can no
longer log in. Users can be banned through their login
credentials or their IP address. Be sure to read Banning
People for more information.
Interceptors
Interceptors can be set up to perform customizable
actions on incoming requests that seek to post content.
Administrators can set up interceptors to prevent specific
users from posting content or to filter and moderate
offensive words, anything from specific IP addresses,
or the posting frequency of specific users. To learn
more about how interceptors work, see Configuring
Interceptors.
Encryption
HTTPS and Browser Encryption
You can configure Jive to encrypt HTTP requests using SSL. Documentation and instructions for configuring SSL are available in Enabling SSL Encryption. Additionally, you can configure Jive to use three different HTTP/HTTPS configurations:
• Allow both HTTP and HTTPS
• Force HTTPS
• Force HTTPS on secure pages (login, registration, change password, etc.)
Data Encryption
Jive supports anything the JVM does at the application level and anything OpenSSL does at the HTTPD level. We actively use Blowfish/ECB/PKCS5P, AES-256 for symmetric key encryption, SHA-256 for one-way hashing, and we support and recommend Triple-DES ciphers at the HTTPD server for TLS encrypted channels.
• SHA-256 -- Jive user passwords are stored in the database as salted SHA-256 hashes.
• AES-256 -- Bridging credentials, License Metering information, and iPhone UDIDs are all encrypted with AES-256.
• Blowfish/ECB/PKCS5Padding -- Used for storing LDAP administrator credentials and OpenSearch credentials in the database.
Cookies
Jive uses HTTP cookies in several places in the application to provide a better user experience. To learn more about
how the application uses cookies, be sure to read Jive and Cookies.
Note: The Jive Professional Services team can deliver security customizations if the out-of-the-box security
features do not meet the specific requirements of your organization.
Jive and Cookies
Jive uses HTTP cookies in several places in the application to provide a better user experience.
Jive does not set third-party cookies as part of the core product offering; however, it is possible for you to configure
the application so that third-party cookies are set. For example, you can configure the application to use a Web-tracking tool such as Google Analytics or Webtrends, each of which may set a third-party cookie.
Jive does not set the "domain" attribute of an HTTP cookie.
Starting with Jive version 4.5.7, all Jive cookies that are set by the server (not via the client or browser) have the
HttpOnly flag.
Setting Up Secure Cookies
Out of the box, Jive is not configured to set the "secure" attribute for cookies that should only be sent via HTTPS
connections. You can configure Jive to send only allowed, secure cookies through the following process:
1. Set the Jive system property "jive.cookies.secure" to the value "true". This results in all Jive-specific cookies
(not including JSESSIONID) having the "secure" attribute set on the cookie (Admin Console: System >
Management > System Properties).
2. Configure both Apache and Tomcat to only allow HTTPS connections. To understand these configurations, see
Forcing Traffic to HTTPS and Enabling SSL Encryption.
3. Finally, configure Tomcat with the "secure" attribute set to "true" in the server.xml configuration file,
specifically the "server/connector" element.
How Jive Sets Cookies
To audit its cookie use, Jive searches the source code for all instances of "setCookie"; the cookies the application sets are listed below.
Note: Except where noted, none of the following cookies contains user-identifiable information. This behavior meets European Union privacy laws.
SPRING_SECURITY_REMEMBER_ME_COOKIE
This cookie is used on the front-end as part of the security authentication process to denote whether or not the user wants to have their credentials persist across sessions. It is part of the Spring Security specification; details are available here.
• Possible values: string, the Base64 encoded username and expiration time combined with an MD5 hex hash of the username, password, expiration time, and private key.
• Expiration: defaults to 14 days.
• Encryption: none. This is an MD5 hex hash.
• Example: SPRING_SECURITY_REMEMBER_ME_COOKIE="YWFyb246M
JSESSIONID
This cookie is used on the front-end and the Admin Console to identify a session. It is part of the Java Servlet specification.
• Possible values: string, the unique token generated by Apache Tomcat.
• Expiration: at session end.
• Encryption: none.
• Example: JSESSIONID="1315409220832msB9E3A98AA1F2005E61FA97596
jive.server.info
This cookie is used on the front-end in combination with Content Distribution Networks (CDN) like Akamai to associate the user with a specific server (also known as "session affinity").
• Possible values: string, a combination of the serverName, serverPort, contextPath, localName, localPort, and localAddr.
• Expiration: at session end.
• Encryption: none.
• Example: jive.server.info="serverName=community.example.com:serverPort=4
jive.user.loggedin
This cookie is used on the front-end in combination with Content Distribution Networks (CDN) to denote the status of the current request.
• Possible values: string, true if the current request originates from a browser where the user is logged in.
• Expiration: at session end.
• Encryption: none.
• Example: jive.user.loggedin="true"
jive_wysiwygtext_height
This cookie is used on the front-end to persist the height of the editor window across sessions.
• Possible values: integer, the height in pixels of the editor after the user chooses to expand the editor window.
• Expiration: one year.
• Example: jive_wysiwygtext_height="500"
jive_default_editor_mode
This cookie is used on the front-end for guest/anonymous users who choose to use an editor mode other than the default editor mode.
• Possible values: string, advanced.
• Expiration: 30 days.
• Encryption: none.
• Example: jive_default_editor_mode="advanced"
clickedFolder
This cookie is used in the Admin Console to persist the open/closed status of the current folder as used in various tree-view portions of the Admin Console.
• Possible values: string, true, or false.
• Expiration: at session end.
• Encryption: none.
• Example: clickedFolder="true"
highlightedTreeviewLink
This cookie is used in the Admin Console to persist the current folder as used in various tree-view portions of the Admin Console.
• Possible values: integer, the DOM ID of the clicked folder.
• Expiration: at session end.
• Encryption: none.
• Example: highlightedTreeviewLink="23"
jiveLocale
This cookie is used on the front-end for guest/anonymous users who choose a locale setting.
• Possible values: string, locale code.
• Expiration: 30 days.
• Encryption: none.
• Example: jiveLocale="en_US"
jiveTimeZoneID
This cookie is used on the front-end for guest/anonymous users who choose a timezone setting.
• Possible values: string, timezone ID.
• Expiration: 30 days.
• Example: jiveTimeZoneID="234"
jive-cookie
This cookie is used in the Admin Console to temporarily persist an encrypted username/password when creating a bridge between two sites. The information in the cookie is first encrypted with AES/256 encryption and then Base64 encoded.
• Possible values: string, Base64 encoded, encrypted username/password of remote site.
• Expiration: at session end.
• Encryption: yes.
• Example: jivecookie="YWFyb246MTMxNTU4MjUzNTI3MDoyZDUyODNmZjh
jive.user.lastvisited
This cookie is used on the front-end to store the last time the user visited the site.
• Possible values: long, value in milliseconds that represents the time of the login.
• Expiration: 30 days.
• Encryption: none.
• Example: jive.user.lastvisited="1315292400000"
JCAPI-Token
This cookie is used to avoid CSRF attacks when the UI layer uses session-based authentication.
Other Cookies
Other services attached to your instance may set their own cookies. You'll need to refer to their documentation for
more information about these cookies.
Google Analytics
The following Google Analytics cookies may be set:
• __utma
• __utmb
• __utmc
• __utmv
• __utmz
F5 Load Balancer
The F5 load balancer used by Jive Software sets the following cookies on hosted and Cloud instances. Note that the
cookie names are dynamic, based on the name of F5's Big IP pool. Yours may be named differently. You can read
more about how F5 uses cookies here.
• BCSI-CS-f89c73c53b0f638a
• BIGipServerm2s4c5-3-pool
Security of Public Cloud Communities
The application's secure data architecture and Safe Harbor certification ensure maximum security and privacy of your public cloud (hosted) instance.
Secure Data Architecture
Virtualization technology and multi-tenancy architectures at the security, storage, and network layers ensure
separation between each public cloud instance and SaaS-delivered feature. Jive Software follows several industry
best practices to harden all cloud operating systems and databases that support all of the layers of the platform. All
hosts use security-hardened Linux distributions with non-default software configurations and minimal processes, user
accounts, and open network ports. Jive Software's cloud engineers never execute commands as root, and all log activity is stored remotely as an additional security precaution.
Jive Software's public cloud instances and SaaS services hosts include various encryption methods to protect data
transmission over untrusted networks. We use SSL or HTTPS for all public cloud instances. Additionally, Jive
Software has implemented encryption for both the data transmission and storage of offsite backups in our remote data
center(s).
We use a variety of automatic tools and manual techniques to ensure our environment is secure.
Secure Data Center
All of Jive Software's hosting data centers have central surveillance monitoring, key cards for initial access, and other
mechanisms such as biometrics or two-factor authentication systems to ensure that only authorized personnel have
access to the physical machines. Only authorized data center personnel are granted access credentials to our data
centers. No one can enter the production area of the data center without prior clearance and an authorized escort. All
Jive Software office locations adhere to similar security controls to limit access to active employees only.
Jive’s network infrastructure was designed to eliminate single points of failure. Each data center leverages multiple
internet feeds from multiple providers, ensuring that in the event of a carrier outage, our public cloud (hosted) sites
and SaaS-delivered services are still available. The switching and routing layers within our data centers are designed
so that device or network link failure does not impact service, and ensures that public cloud customers have the
highest level of availability.
Jive Software's public cloud and SaaS-delivered services are backed up regularly to guard against data loss. All
instances are backed up to an offsite location using an electronic backup and recovery system. All backed-up
information is transmitted and stored in an encrypted format. The Jive Software public cloud team performs failure
and redundancy testing during the implementation phase and as needed throughout the equipment lifecycle.
Jive Software is Safe Harbor and TRUSTe certified and our data centers are either SAS 70, SSAE 16 SOC2, or ISO
27001 certified, depending on the location.
Public Cloud FAQ
For more questions and answers about our public cloud platform, see Public Cloud FAQ in the Jive Community. You
will need to be a registered user of the community to view this document.
Security of Private Cloud Communities
Our private cloud offering provides you with all of the features and security of the public cloud product, but deployed
behind your firewall.
Security of Cloud-Delivered Services
Some of the application's services are cloud-delivered to your instance and updated regularly via our private network.
Web Services
Web services requests fully respect the configured
permissions of Jive. As such, if web services are fully
enabled for all users, there is no risk of private content
exposure or invalid content manipulation. Users will
only be able to view and modify content they have access
to in the application. However, web services do give
users a mechanism for creating and downloading content
en masse. If you do not want users to have the ability
to quickly download all of the content they are able to
access, consider enabling web services only for specific
users. To learn how to do this, see Setting Access for Web
Service Clients.
Video -- see Video Plugin Security
Mobile -- see Mobile Security
Jive Apps Market -- see Jive Apps Security
Jive Genius and Recommender Service -- see Jive Genius Security
Security Recommendations
These security recommendations depend on your community's specific configuration.
Caution: Each community can be configured differently. Because of this, not all of these recommendations
apply to all communities. If you have any questions about these recommendations, please contact your Jive
Software representative.
Internal communities are typically for employees only.
External communities are typically for customers, vendors, and other external audiences.
Security recommendation: Configure user login security
Applies to: External Communities
Login security can include throttling, Captcha, and password strength requirements.
For implementation details: See Configuring Login Security and Configuring User Registration.

Security recommendation: Enable SSO
Applies to: Internal Communities
A single sign-on solution can help you provide a consistent login experience for your users while providing identity management for your organization via a third-party vendor. Jive Software strongly recommends using a single sign-on solution for access to internal communities. In addition to the out-of-the-box SSO options in the application, our Professional Services team can create customizations to meet almost any single sign-on requirement.
For implementation details: See the Single Sign-On section, or, if you need an SSO customization, contact your Jive Software account representative.

Security recommendation: Add an extra layer of security with SSL
Applies to: External and Internal Communities
SSL enables you to encrypt HTTP requests. Over the past few years it's become more common for public sites that request a username and password to give the user the option to browse the site in either HTTP or HTTPS. For security and ease of use, we believe that authenticated users should always browse the community via HTTPS, because it's become commonplace to browse the Internet via insecure wifi access points. Any community that allows its authenticated users to browse via HTTP is open to session hijacking.
For implementation details: See Enabling SSL Encryption.

Security recommendation: Add VPN
Applies to: Internal Communities
If you use both SSO login and SSL/HTTPS user access, you shouldn't need VPN, too. However, VPN-only access to the community can be configured in both public and private cloud communities.
For implementation details: Contact your IT department to set up VPN-only access to the Jive application.

Security recommendation: Prevent spam in your community
Applies to: External Communities
Everyone hates spam, and it can also present security risks. Limit it in your community as much as you can.
For implementation details: Preventing Spam includes several suggestions for dealing with spammers and preventing spam in your community.

Security recommendation: Understand administrative permissions and how they work
Applies to: External and Internal Communities
Administrative permissions can be a powerful tool for limiting who can make changes to your community.
For implementation details: See the Managing Administrative Permissions section.

Security recommendation: Add an extra username/password verification step for Admin Console access via Apache
Applies to: External and Internal Communities
Apache includes a couple of features that can help you keep Jive more secure. Jive runs on Tomcat behind an Apache HTTP web server. You can set up Apache features such as IP restrictions or basic authentication for specific URLs using standard Apache HTTP configurations. The main Apache HTTP configuration file for the Jive application is /usr/local/jive/etc/httpd/conf/httpd.conf.
For requests inside your network, Apache should remain totally open. The security for specific requests (admin pages, file attachments, hidden content) is all executed at the Tomcat/Java level. For every request that comes in, the user's account is looked up and the permissions are checked against the specific request. Because of this, users are only able to access URLs that they have permission to view. Some system administrators choose to set IP filtering or basic authentication (via Apache) on the Admin Console. This is primarily useful for externally oriented Jive communities (those that allow employees as well as vendors and customers as community users) so that users are unaware of an Admin Console. There is no security risk in leaving the /admin URL exposed. If you implement this, users trying to access any of the Admin Console pages must successfully enter their external username/password combo to gain access. (A configuration sketch follows this table.)
For implementation details: See Apache's documentation.

Security recommendation: Understand the security of the Jive Genius Recommender Service
Applies to: External and Internal Communities
This cloud-delivered service communicates between your community and Jive Software via a secure proxy and state-of-the-art encryption protocols.
For more details: See Jive Genius Security.

Security recommendation: Block search robots
Applies to: External Communities
Search robots can wreak havoc in your community, so it's a good idea to set up robot blockers.
For implementation details: See this tutorial.
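As an illustration only: the directives below are standard Apache basic-authentication configuration, but the htpasswd file location, the account name, and the assumption that you can append directly to httpd.conf (rather than to an included file) are examples; adapt them to your deployment and test before relying on them.
# Create a password file and an account for Admin Console access
htpasswd -c /usr/local/jive/etc/httpd/conf/admin.htpasswd adminuser
# Append a basic-auth block for /admin to the Jive HTTPD configuration
cat >> /usr/local/jive/etc/httpd/conf/httpd.conf <<'EOF'
<Location /admin>
    AuthType Basic
    AuthName "Jive Admin Console"
    AuthUserFile /usr/local/jive/etc/httpd/conf/admin.htpasswd
    Require valid-user
</Location>
EOF
# Restart the Jive HTTPD service to apply the change
/etc/init.d/jive-httpd restart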
Security FAQ
Does Jive Software access data from my public cloud instance?
Jive Software aggregates data from our public cloud customer instances. The kinds of data we collect include
usage statistics, user travel patterns, adoption statistics, and other anonymous information. Among other things,
this information helps us to make decisions about future product development and improvement. In addition, your
contract sets forth how we protect your user-generated content (i.e., we access this data solely to provide support and
other services to you as you request).
How does the application prevent cross-site request forgeries (CSRF) using request-based tokens?
Every form throughout the application is protected from CSRF by a token scoped to each request which prevents
forgery attempts. The server requires the token on any request that can change data. If the token is not present or does
not match, the HTTP request will fail.
Are web services tested?
Yes. All web services are tested as part of an automated monthly security scan process.
Why do you zip or compress certain types of files when a user uploads them to Jive?
There are a number of known security issues with Internet Explorer (IE). In particular, IE will attempt to display or
execute a file even if the web server sends an HTTP header indicating that the browser should download, instead
of display, the file. This behavior, also known as "content sniffing" or "MIME sniffing," allows attackers to upload
seemingly okay files (for example, an MS Word file) that actually contain malicious HTML. An IE user would then
attempt to view the file. If the file is not zipped, IE will "sniff" the contents of the file, determine that the file is
HTML, and then attempt to render the HTML instead of opening the file with MS Word.
The following types of files are zipped by Jive when they are attached to content: text/plain and text/HTML. Jive uses
a magic number process to determine the correct MIME type of an uploaded file. For example, if a document called
mydocument.doc is uploaded, the magic number process will validate the document. If the file is actually an HTML
file, then Jive zips the file as a security precaution.
Does Jive use Sun's Java Virtual Machine (JVM)?
Yes. Jive uses Sun's JVM 1.6 and the Java Secure Socket Extension (JSSE), which is FIPS 140-compliant.
Which cryptographic technologies are used in Jive?
Jive supports anything the JVM does at the application level and anything OpenSSL does at the HTTPD level. We actively use Blowfish/ECB/PKCS5Padding and AES-256 for symmetric key encryption and SHA-256 for one-way hashing, and we support and recommend Triple-DES ciphers at the HTTPD server for TLS-encrypted channels.
• SHA-256 -- Jive user passwords are stored in the database as salted SHA-256 hashes.
• AES-256 -- Bridging credentials, License Metering information, and iPhone UDIDs are all encrypted with AES-256.
• Blowfish/ECB/PKCS5Padding -- Used for storing LDAP administrator credentials and OpenSearch credentials in the database.
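To confirm which TLS ciphers your front-end HTTPD actually negotiates, you can probe it with the standard openssl client. This is only a sketch: the host name is an example, and DES-CBC3-SHA is OpenSSL's name for the Triple-DES cipher suite mentioned above.

# Check whether the server accepts the Triple-DES (3DES) cipher suite
openssl s_client -connect community.example.com:443 -cipher DES-CBC3-SHA < /dev/null

# List the locally available 3DES cipher names you can test against
openssl ciphers -v '3DES'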
If your product uses cryptography, has this cryptography been certified under the Cryptographic
Module Validation Program or is it in the process of CMVP certification? If yes, which
Cryptographic Module Testing (CMT) Laboratory are you using and what is your Cryptographic
Module Security Level?
Jive uses Sun's JVM 1.6 and the Java Secure Socket Extension (JSSE).
Is the product public-key enabled and is it interoperable with the DoD PKI?
Jive supports X.509-based PKI. However, extra configuration steps are required; we recommend a Jive Professional
Services customization.
I am a public cloud hosted customer. Can you encrypt my data at rest?
Yes. We can encrypt your dedicated databases that reside in our hosting data centers. Contact your Jive Software
representative to request this additional service and pricing schedules. Note that this service may require additional
lead time depending on the size and traffic of your community.
Platform Run Book (Linux)
This document covers basic system-administration commands for managing the platform.
Use the information here for tasks that a system administrator would need to perform without detailed familiarity with the application's functionality. This guide assumes a basic understanding of common Unix or Linux commands and concepts. You'll find the application management commands mentioned here documented in
Application Management Commands.
For a higher-level look at the platform, be sure to see the Platform Overview.
Jive HTTPD
The Jive HTTPD service is the main access point for browser-based HTTP and HTTPS access to the Jive system.
Start Jive HTTPD
To start the jive-httpd service, execute the following command as root:
[root@biodome:~]$ /etc/init.d/jive-httpd start
Starting jive-httpd:
OK
If the command completes successfully, an OK message will be printed to the console and the exit code of the
command will be zero.
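Because a successful start is signaled by a zero exit code, a wrapper script can key off that value; a minimal sketch:

# Start the service and report based on the exit code
if /etc/init.d/jive-httpd start; then
    echo "jive-httpd started successfully"
else
    echo "jive-httpd failed to start (exit code $?)" >&2
fi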
Restart Jive HTTPD
To restart the jive-httpd service, execute the following command as root:
[root@biodome:~]$ /etc/init.d/jive-httpd restart
Stopping jive-httpd:
OK
Starting jive-httpd:
OK
Stop Jive HTTPD
To stop the jive-httpd service, execute the following command as root:
[root@biodome:~]$ /etc/init.d/jive-httpd stop
Stopping jive-httpd:
OK
If the command completes successfully, an OK message will be printed to the console and the exit code of the
command will be zero.
Monitoring Jive HTTPD
The jive-httpd service supports a "status" command issued to the standard init script located at /etc/init.d/jive-httpd.
Here is an example of checking the service status as the root user:
[root@biodome:~]$ /etc/init.d/jive-httpd status
JIVE_HOME set to /usr/local/jive
Running: 2393 (2396, 2397)
In the above example, the parent process of the jive-httpd system daemon is 2393, with child processes of 2396 and
2397.
In addition to the status script, it is possible to check the status of the jive-httpd daemon using standard Unix
commands. For example, the following ps command will list all jive-httpd processes running on the host:
[root@biodome:~]$ ps -ef | grep jive-httpd | grep -v grep
root  2393     1  0 14:41 ?  00:00:00 /usr/local/jive/httpd/bin/jive-httpd -f /usr/local/jive/etc/httpd/conf/httpd.conf -k start
jive  2395  2393  0 14:41 ?  00:00:00 /usr/local/jive/httpd/bin/jive-httpd -f /usr/local/jive/etc/httpd/conf/httpd.conf -k start
jive  2396  2393  0 14:41 ?  00:00:00 /usr/local/jive/httpd/bin/jive-httpd -f /usr/local/jive/etc/httpd/conf/httpd.conf -k start
jive  2397  2393  0 14:41 ?  00:00:00 /usr/local/jive/httpd/bin/jive-httpd -f /usr/local/jive/etc/httpd/conf/httpd.conf -k start
jive  2398  2393  0 14:41 ?  00:00:00 /usr/local/jive/httpd/bin/jive-httpd -f /usr/local/jive/etc/httpd/conf/httpd.conf -k start
jive  2399  2393  0 14:41 ?  00:00:00 /usr/local/jive/httpd/bin/jive-httpd -f /usr/local/jive/etc/httpd/conf/httpd.conf -k start
Jive HTTPD Networking
The jive-httpd server by default listens for connections on port 80, on all available network interfaces. If configured
for SSL (see the Operations Cookbook), the server will also listen for connections on port 443. The following
commands will show if the jive-httpd service is listening on the designated ports.
[root@melina ~]# lsof -n -i TCP:80 -i TCP:443
COMMAND     PID USER   FD   TYPE DEVICE SIZE NODE NAME
jive-http  3094 root    3u  IPv6  30661      TCP  *:http (LISTEN)
jive-http  3098 jive    3u  IPv6  30661      TCP  *:http (LISTEN)
jive-http  3099 jive    3u  IPv6  30661      TCP  *:http (LISTEN)
jive-http  3100 jive    3u  IPv6  30661      TCP  *:http (LISTEN)
jive-http  3101 jive    3u  IPv6  30661      TCP  *:http (LISTEN)
jive-http  3102 jive    3u  IPv6  30661      TCP  *:http (LISTEN)
jive-http  3104 jive    3u  IPv6  30661      TCP  *:http (LISTEN)
jive-http  3105 jive    3u  IPv6  30661      TCP  *:http (LISTEN)
jive-http  3273 jive    3u  IPv6  30661      TCP  *:http (LISTEN)
jive-http  3274 jive    3u  IPv6  30661      TCP  *:http (LISTEN)
jive-http  3275 jive    3u  IPv6  30661      TCP  *:http (LISTEN)
In the above example, multiple jive-httpd processes are providing the "http" service. If listening for SSL or TLS
connections, the "https" service will also be present.
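As an alternative to lsof, the listening sockets can also be checked with netstat, assuming that utility is installed on the host; for example:

# Show listening TCP sockets on the standard HTTP/HTTPS ports
netstat -lntp | grep -E ':(80|443) '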
Jive HTTPD Log Files
All log files for jive-httpd are stored in the standard platform log directory - /usr/local/jive/var/logs. The following
command illustrates how to view the available logs.
[root@melina logs]# ls -l /usr/local/jive/var/logs/*http*
-rw-r--r-- 1 root root   224 Feb 23 16:12 /usr/local/jive/var/logs/httpd-error.log
-rw-r--r-- 1 root root 19454 Feb 26 08:25 /usr/local/jive/var/logs/jive-httpd-access.log
-rw-r--r-- 1 root root   854 Feb 23 16:13 /usr/local/jive/var/logs/jive-httpd-error.log
-rw-r--r-- 1 root root   854 Feb 23 16:13 /usr/local/jive/var/logs/jive-httpd-ssl-access.log
-rw-r--r-- 1 root root   854 Feb 23 16:13 /usr/local/jive/var/logs/jive-httpd-ssl-error.log
In the above example, startup logs are captured to the "httpd-error.log" file. Requests handled by the standard jive-httpd server are maintained in the "jive-httpd-access.log" file, while errors during normal runtime are captured to "jive-httpd-error.log". Likewise, SSL or TLS encrypted requests are captured to the corresponding log files with "ssl" in the name of the file.
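When troubleshooting, it is often convenient to follow the error logs in real time; a simple sketch:

# Follow the jive-httpd error logs as new entries are written
tail -f /usr/local/jive/var/logs/httpd-error.log /usr/local/jive/var/logs/jive-httpd-error.log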
Jive-Managed Applications
An installation of the Jive platform will host one or more distinct applications. All managed applications have both system-level scripts that are invoked at system startup and shutdown, and scripts locally available to the "jive" system user created when the platform is installed. The following operations are available for managing and monitoring managed applications.
Start Jive-Managed Applications
To start all Jive-managed applications, a standard "init" script is avialable for use by the root user. This script is
invoked at system boot to start all managed applications.
[root@biodome:~]$ /etc/init.d/jive-application start
JIVE_HOME set to /usr/local/jive
Starting jive-application:
All applications started successfully (1 total).
In addition to the system scripts, the jive user may start any individual application or all managed applications using
the appstart command. The appstart command is automatically added to the jive user's interactive shell path and may
be run after becoming the jive user (commonly via the "su" command). The following example demonstrates the
root user using the "su" command to become the jive user and then the use of the appstart command to start the "sbs"
application.
[root@biodome:~]$ su - jive
[1456][jive@biodome:~]$ appstart --verbose sbs
Handling applications ['sbs']
Starting sbs...
Executing /usr/local/jive/applications/sbs/bin/manage start
sbs started successfully.
Stop Jive-Managed Applications
To stop all Jive-managed applications, execute the system stop script.
[root@biodome:~]$ /etc/init.d/jive-application stop
JIVE_HOME set to /usr/local/jive
Stopping jive-application:
All applications stopped successfully (1 total).
Similar to the appstart command, the appstop command may be executed to stop managed applications as the jive
user.
[1457][jive@biodome:~]$ appstop --verbose
Stopping sbs...
Executing /usr/local/jive/applications/sbs/bin/manage stop
sbs stopped successfully.
Cleaning sbs application work directory at /usr/local/jive/var/work/sbs.
All applications stopped successfully (1 total).
Monitoring Jive-Managed Applications
To show all running Jive-managed applications, execute the appls command with the "--running" flag as the jive user
as in the following example.
[1507][jive@biodome:~]$ appls --running
stage                              running (pid=2799)
In this example, the "stage" application is currently running with a process ID of 2799. To monitor the individual
process, standard tools like the "ps" command can be used with the process ID from appls output as in the following
example.
[1542][jive@biodome:~]$ ps -ef | grep 2799 | grep -v grep
jive      2799     1  0 15:06 pts/0    00:00:16 /usr/local/jive/java/bin/java
    -XX:+PrintClassHistogram -XX:+PrintTenuringDistribution
    -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true
    -Djava.net.preferIPv4Stack=true -Xloggc:/usr/local/jive/var/logs/stage-gc.log
    -Xmx2048m -Xms2048m -XX:MaxPermSize=512m -Djive.home=/usr/local/jive
    -Djive.instance.home=/usr/local/jive/applications/stage/home
    -Djive.name=stage -Djive.context=/stage -Djive.logs=/usr/local/jive/var/logs
    -Djive.application=/usr/local/jive/applications/stage/application
    -Djive.work=/usr/local/jive/var/work/stage -Djive.app.cache.ttl=10000
    -Djive.app.cache.size=10240 -Dserver.port=9500
    -Dhttp.addr='127.0.0.1' -Dhttp.port=9502 -Dajp.addr=127.0.0.1
    -Dajp.port=9501 -Dajp.buffer.size=4096 -Dajp.max.threads=50
    -Dlog4j.configuration=file:///usr/local/jive/applications/stage/conf/log4j.properties
    -Dtangosol.coherence.clusteraddress='224.224.224.224'
    -Dtangosol.coherence.clusterport=9503
    -Dcatalina.base=/usr/local/jive/applications/stage
    -Dcatalina.home=/usr/local/jive/tomcat
    -Djava.io.tmpdir=/usr/local/jive/var/work/stage
    -classpath /usr/local/jive/applications/stage/bin//bootstrap.jar:/usr/local/jive/applications/stage/bin/tomcat-juli.jar::/usr/local/jive/java/lib/tool.jar
    org.apache.catalina.startup.Bootstrap start
Alternatively, the following example combines both operations into a single command.
[1539][jive@biodome:~]$ ps -ef | grep `appls --running | awk -F'=' '{print $2}' | tr -cd '[:digit:]'`
jive      2799     1  0 15:06 pts/0    00:00:16 /usr/local/jive/java/bin/java
    -XX:+PrintClassHistogram -XX:+PrintTenuringDistribution
    -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -Djava.awt.headless=true
    -Djava.net.preferIPv4Stack=true -Xloggc:/usr/local/jive/var/logs/stage-gc.log
    -Xmx2048m -Xms2048m -XX:MaxPermSize=512m -Djive.home=/usr/local/jive
    -Djive.instance.home=/usr/local/jive/applications/stage/home
    -Djive.name=stage -Djive.context=/stage -Djive.logs=/usr/local/jive/var/logs
    -Djive.application=/usr/local/jive/applications/stage/application
    -Djive.work=/usr/local/jive/var/work/stage -Djive.app.cache.ttl=10000
    -Djive.app.cache.size=10240 -Dserver.port=9500
    -Dhttp.addr='127.0.0.1' -Dhttp.port=9502 -Dajp.addr=127.0.0.1
    -Dajp.port=9501 -Dajp.buffer.size=4096 -Dajp.max.threads=50
    -Dlog4j.configuration=file:///usr/local/jive/applications/stage/conf/log4j.properties
    -Dtangosol.coherence.clusteraddress='224.224.224.224'
    -Dtangosol.coherence.clusterport=9503
    -Dcatalina.base=/usr/local/jive/applications/stage
    -Dcatalina.home=/usr/local/jive/tomcat
    -Djava.io.tmpdir=/usr/local/jive/var/work/stage
    -classpath /usr/local/jive/applications/stage/bin//bootstrap.jar:/usr/local/jive/applications/stage/bin/tomcat-juli.jar::/usr/local/jive/java/lib/tool.jar
    org.apache.catalina.startup.Bootstrap start
List Jive-Managed Applications
A list of all managed applications can be obtained by executing the appls command as the jive user as shown in the
following example.
[1507][jive@biodome:~]$ appls
stage                              running (pid=2799)
development                        stopped (pid=None)
In the output above, the "stage" application is running with process ID 2799, while the "development" application is not running.
Jive-Managed Application Networking
The network ports and addresses used by a managed Jive application will vary depending on usage. The default Jive
application will work on the following addresses and ports.
Service                          Protocol          Address
Application server management    TCP               127.0.0.1:9000
HTTP                             TCP               127.0.0.1:9001
AJP                              TCP               127.0.0.1:9002
Multicast Cluster                UDP/Multicast     224.224.224.224:9003
Note that managed applications should not be accessed directly via the HTTP 9001 port; it is recommended that a firewall prevent access to that port. The port exists for troubleshooting and support purposes only and is not intended for production use.
To validate that the TCP services are present for a default install, execute the following command.
[root@melina ~]# lsof -n -P | grep jive | grep java | grep LISTEN
java  3204  jive  30u  IPv6  31631  TCP 127.0.0.1:9001 (LISTEN)
java  3204  jive  31u  IPv4  31632  TCP 127.0.0.1:9002 (LISTEN)
java  3204  jive  39u  IPv4  38046  TCP 127.0.0.1:9000 (LISTEN)
Jive-Managed Application Logs
Log files for Jive-managed applications are located in the var/logs directory of the jive user's home directory (/usr/local/jive/var/logs). The following log files can be consulted for further information on the status of individual applications. Each file will be prefixed with the name of the corresponding application. For example, for the "stage" application, the container log file will be named "stage-container.log".
• <name>.log - Primary log file for a managed application; most log entries will be located here.
• <name>-container.log - Early bootstrap log file for the application server container hosting the web application.
• <name>-session.log - Log file capturing creation and eviction of user session data.
• <name>.out - Redirection of standard out and standard error for the application process; may contain data not in the main log file.
• <name>-gc.log - Java garbage collection logs for the application.
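To spot problems quickly in a managed application's primary log, a simple search such as the following can help; "sbs" is the default application name, so substitute your own if it differs:

# Show the most recent error-level entries in the sbs application log
grep -i error /usr/local/jive/var/logs/sbs.log | tail -n 20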
Jive Database Server
The Jive platform ships with a local PostgreSQL database server. The following operations are available for the
database server.
Important: The pre-packaged PostgreSQL DBMS is for evaluation purposes and should not be used for
production instances.
Start Jive Database Server
To start the database server, execute the following system command as the root user.
[root@biodome:~]$ /etc/init.d/jive-database start
JIVE_HOME set to /usr/local/jive
Starting jive-database:
server starting
Stop Jive Database Server
To stop the database server, execute the following system command as the root user.
[root@biodome:~]$ /etc/init.d/jive-database stop
JIVE_HOME set to /usr/local/jive
Stopping jive-database:
waiting for server to shut down.... done
server stopped
Note that stopping the database while managed applications are using the database will result in applications that
cannot service requests. Additionally, stopping the database while applications are connected may result in a lengthy
shutdown time or a failed shutdown.
Monitoring Jive Database Server
Monitoring the database server can be done as the root user with system scripts, or with traditional Unix commands.
To check the status of the jive database, execute the following command as the root user.
[root@biodome:~]$ /etc/init.d/jive-database status
pg_ctl: server is running (PID: 3211)
/usr/local/jive/postgres/bin/postgres "-D" "/usr/local/jive/var/data/postgres-8.3"
The output of the above command lists the parent process of the database system (3211 in this example) and shows
the command used to start the database.
A healthy, running database system will have multiple processes. The following command will show all running
database processes on the system:
[root@biodome:~]$ ps -ef | grep post | grep -v grep
jive  3211     1  0 17:13 ?  00:00:00 /usr/local/jive/postgres/bin/postgres -D /usr/local/jive/var/data/postgres-8.3
jive  3214  3211  0 17:13 ?  00:00:00 postgres: writer process
jive  3215  3211  0 17:13 ?  00:00:00 postgres: wal writer process
jive  3216  3211  0 17:13 ?  00:00:00 postgres: autovacuum launcher process
jive  3217  3211  0 17:13 ?  00:00:00 postgres: archiver process
jive  3218  3211  0 17:13 ?  00:00:00 postgres: stats collector process
Jive Database Server Networking
In the default configuration, the included database service listens for connections on TCP address 127.0.0.1 port 5432.
To verify that the database is listening for connections, execute the following command.
[root@melina ~]# lsof -n -P | grep jive | grep postgres | grep LISTEN
postgres  2990  jive  3u  IPv4  21499  TCP 127.0.0.1:5432 (LISTEN)
Jive Database Server Logs
Logs for the database server are maintained in the platform log directory at "/usr/local/jive/var/logs/postgres.log".
Operations Cookbook
This section provides sample configurations and script examples common to the long-term operation of a Jive installation.
Unlike the tasks in the Platform Run Book (Linux), these operations are typically performed when setting up a new installation rather than as part of day-to-day operation of the platform.
Enabling SSL Encryption
The Jive platform is capable of encrypting HTTP requests via SSL or TLS. Enabling encryption of HTTP traffic
requires the following steps on a platform-managed host.
Note: To ensure consistent results, you should enable SSL for your UAT environment as well as your
production instance of Jive. As of Jive 5, the Apps Market requires an additional domain. To properly
test and implement SSL, then, you'll need certificates for community.yourdomain.com and
apps.community.yourdomain.com (Production) as well as community-uat.yourdomain.com
and apps.community-uat.yourdomain.com (UAT). To secure these domains, you should purchase
two Multiple Domain UC certificates with SAN entries for the Apps domain. If you're a hosted customer, you
can contact Support instead of using the steps below to apply the certificates.
1. Copy cryptographic materials to the host. By default, the Jive HTTPD server attempts to load an X.509 certificate
file from the path /etc/jive/httpd/ssl/jive.crt and the corresponding key from /etc/jive/httpd/ssl/jive.key. The
paths to these files are configured in the default Apache HTTPD virtual host file located at /etc/jive/httpd/sites/
default.conf and can be changed to any path desired.
2. Import the jive.crt into the Java Tomcat keystore. For example, run the following command as root, then restart
the application:
/usr/local/jive/java/jre/bin/keytool -import -alias jiveCert -file /usr/
local/jive/etc/httpd/ssl/jive.crt -keystore /usr/local/jive/java/jre/lib/
security/cacerts
3. Enable SSL in the HTTPD server by specifying the -D SSL option in the Apache HTTPD configuration extension
file located at /etc/jive/conf/jive-httpd. To enable SSL, open (or create) this file and add OPTIONS="-D SSL" to
the file.
4. Whether you are using Jive's HTTP server directly or are behind a third-party load balancer, add three attributes to the file at /usr/local/jive/applications/<app_name>/conf/server.xml. To the first (HTTP) /Server/Connector element, add scheme="https" proxyPort="443" proxyName="your.domain.com", where your.domain.com is the domain of your application (see the sketch following these steps).
5. After making the changes above, restart the Jive HTTPD server as described in the runbook for Linux. Restart the
Tomcat server.
6. Update the jiveURL in the Admin Console: System Management > System Properties.
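As a rough sketch of step 4, the first Connector element in server.xml might end up looking something like the following. The port, protocol, and address attributes shown here are illustrative defaults; the only additions required by this procedure are scheme, proxyPort, and proxyName.

<!-- First (HTTP) Connector in /usr/local/jive/applications/<app_name>/conf/server.xml (sketch) -->
<Connector port="9001" protocol="HTTP/1.1" address="127.0.0.1"
           scheme="https" proxyPort="443" proxyName="community.yourdomain.com" />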
Note: Except where noted above, if a third-party load balancer or external HTTP proxy is performing SSL
termination upstream of the Jive HTTPD server, it is not necessary to configure the Jive HTTPD server for
HTTP encryption in addition to the load balancer.
Note: If the private key file installed to the server is encrypted, the HTTPD server will interactively prompt
for the password to decrypt the key. The default password is changeit.
Configuring a Persistent Session Manager
Configure a persistent session manager by modifying the context.xml file.
The Jive package includes a persistent session manager. To use it, you'll need to edit the /usr/local/jive/
applications/sbs/conf/context.xml file to include your core database's relevant information. If you've
installed Jive on a cluster, you'll need to modify the context.xml file for each node of the cluster.
In the context.xml file, add a <Context> element that includes your database information. Here is an example:
<!-- The contents of this file will be loaded for each web application -->
<Context>
<!-- Default set of monitored resources -->
<WatchedResource>WEB-INF/web.xml</WatchedResource>
<!-- prevent tomcat from saving HTTP session -->
<Manager
className="com.jivesoftware.catalina.session.JivePersistentManager"
saveOnRestart="true" maxActiveSessions="-1" minIdleSwap="-1"
maxIdleSwap="-1" maxIdleBackup="1" processExpiresFrequency="1">
<Store
className="com.jivesoftware.catalina.session.store.JiveJDBCSessionStore"
driverName=""
connectionURL=""
connectionName=""
connectionPassword=""
sessionTable="jivesession" />
</Manager>
</Context>
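As an illustration only, here is how the <Store> element's connection attributes might be filled in for the default local PostgreSQL core database. The driver class is the standard PostgreSQL JDBC driver; the URL, user, and password shown are placeholders to replace with your core database's actual values.

<Store
 className="com.jivesoftware.catalina.session.store.JiveJDBCSessionStore"
 driverName="org.postgresql.Driver"
 connectionURL="jdbc:postgresql://localhost:5432/sbs"
 connectionName="sbs"
 connectionPassword="your-database-password"
 sessionTable="jivesession" />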
Forcing Traffic to HTTPS
You can configure the platform so that all requests are routed through HTTPS by default.
Before going through the following steps, be sure to enable SSL (HTTPS). For more information, please see Enabling
SSL Encryption.
1. Locate the file /usr/local/jive/etc/httpd/sites/default.conf and open it with a text editor.
2. In the file, look for the text "RewriteEngine On". After that line, add the following lines:
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
3. Restart the Jive HTTPD (Linux) service by running the restart command as root.
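After restarting, you can verify the redirect from a shell. This is a quick sketch using curl; the host name is an example:

# Expect an HTTP redirect (e.g., 301/302) with a Location header pointing at https://
curl -sI http://community.yourdomain.com/ | grep -iE '^(HTTP|Location)'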
Restricting Admin Console Access by IP Address
You can secure the admin console by allowing or denying specific IP addresses.
To specify who can access the admin console based on IP address:
1. Locate the /usr/local/jive/etc/httpd/sites/default.conf file.
2. Allow or deny IP addresses by adding and modifying the following code.
<Location /admin>
    Order Deny,Allow
    Allow from <IP ADDRESS>
    Allow from <IP ADDRESS>
    Allow from <IP ADDRESS>
    Allow from <IP ADDRESS>
    Allow from <IP ADDRESS>
    Deny from all
</Location>
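The directives above use the Apache 2.2-style authorization syntax of the bundled HTTPD. If you instead front Jive with your own Apache 2.4 (or later) server, the equivalent rule uses the Require directive from mod_authz_core; a sketch with example addresses:

<Location /admin>
    Require ip 10.0.0.0/8 192.168.1.25
</Location>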
Disabling the Local Jive System Database
Many deployments will not use the locally managed platform database, choosing instead to use an RDBMS that is controlled by an internal corporate IT group. In this case, the Jive local database should be disabled by executing the deactivate command as the root user.
The following terminal output demonstrates deactivating, and then reactivating, the Jive database service:
[root@biodome ~]# /etc/init.d/jive-database deactivate
Jive Database deactivated.
[root@biodome ~]# /etc/init.d/jive-database activate
Jive Database activated. The database will start automatically on the next
system restart.
Note: Disabling the database does not stop the service if it is running. Likewise, re-enabling the database does not start the database service. Also, disabling the local system database will unschedule all standard local system database maintenance tasks.
Changing the Configuration of an Existing Instance
Update environment variables in your /usr/local/jive/applications/<app_name>/bin/instance
file to reflect new configuration settings.
In some circumstances, it may be desirable to change the default configuration of platform-managed application
server instances. For example, on a larger server-class machine, an application instance will benefit from allocation of
more RAM for the JVM heap.
To change this or other settings, edit the instance file for the desired application (sbs by default) located at /
usr/local/jive/applications/<app_name>/bin/instance.
The contents of this file will vary from release to release. Generally, the entries in this file correspond to either:
•
•
Environment variable values in the setenv script located in the same directory
Tokenized configuration attributes for the conf/server.xml file in the application directory
For any managed application, all files except the web application binaries (by default, each application is linked to the binaries located at /usr/local/jive/applications/template/application) are left unmanaged by the application platform. As a result, any changes to files such as instance are preserved across application upgrades.
Changing the Port
As an example, to change the port on which the managed application listens for AJP connections, edit the instance file to alter the value of AJP_PORT.
Prior to edit, the instance file will look similar to the following.
[0806][jive@melina:~/applications/sbs/bin]$ cat instance
export JIVE_HOME="/usr/local/jive"
export AJP_PORT="9002"
export APP_CLUSTER_ADDR="224.224.224.224"
export JIVE_APP_CACHE_TTL="10000"
export APP_CLUSTER_PORT="9003"
export HTTPD_ADDR="0.0.0.0"
export AJP_BUFFER_SIZE="4096"
export HTTP_ADDR="127.0.0.1"
export JIVE_APP_CACHE_SIZE="10240"
export SERVER_PORT="9000"
export JIVE_NAME="sbs"
export HTTP_PORT="9001"
export AJP_ADDR="127.0.0.1"
export JIVE_CONTEXT=""
export AJP_THREADS_MAX="50"
To alter the AJP_PORT to listen on port 11000, edit the instance file to appear similar to the following:
[0806][jive@melina:~/applications/sbs/bin]$ cat instance
export JIVE_HOME="/usr/local/jive"
export AJP_PORT="11000"
export APP_CLUSTER_ADDR="224.224.224.224"
export JIVE_APP_CACHE_TTL="10000"
export APP_CLUSTER_PORT="9003"
export HTTPD_ADDR="0.0.0.0"
export AJP_BUFFER_SIZE="4096"
export HTTP_ADDR="127.0.0.1"
export JIVE_APP_CACHE_SIZE="10240"
export SERVER_PORT="9000"
export JIVE_NAME="sbs"
export HTTP_PORT="9001"
export AJP_ADDR="127.0.0.1"
export JIVE_CONTEXT=""
export AJP_THREADS_MAX="50"
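The instance file is read when the application starts, so the application generally needs to be restarted before a new port takes effect; a sketch as the jive user, using only the commands shown earlier in this run book:

# Stop the managed applications, then start the default "sbs" application again
appstop --verbose
appstart --verbose sbs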
Changing the Heap Min/Max Values
To change the JVM min/max values, see Adjusting Java Virtual Machine (JVM) Settings.
Configuring the JVM Route Name of a Node(s)
To configure the route name of your web application node(s), add a line(s) to the instance file in /usr/local/
jive/applications/<app_name>/bin as follows, where "node01" is your desired route name:
export APP_CLUSTER_JVMROUTE="node01"
When configuring multiple nodes with jvmRoute attributes, each node should have a different value.
Performing a Jive System Database Backup
Jive-managed databases will perform automatic backups as described in Auto Backups in the Platform Overview.
In some situations, for example prior to an upgrade of the platform, you should perform a full database backup
manually.
Important: The pre-packaged PostgreSQL DBMS is for evaluation purposes and should not be used for
production instances.
To manually perform a full backup of the managed database, execute the dbbackup script as the jive user.
[0801][jive@melina:~]$ ./bin/dbbackup
/bin/tar: Removing leading '/' from member names
If successful, the command produces no further output and returns an exit code of zero; otherwise it returns a non-zero exit code.
You can restore from a backup by using PostgreSQL commands. In the PostgreSQL documentation, the section on
recovering from your backup is probably the most specific. For a broader view, be sure to see the contents of their
documentation on backing up and restoring.
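Because dbbackup signals success purely through its exit code, a scheduled backup (for example, from cron) can use that code for alerting. A minimal sketch follows; the log file name is an example of your own choosing:

# Run a manual full backup and log the outcome based on the exit code
if /usr/local/jive/bin/dbbackup; then
    echo "$(date): Jive database backup completed" >> /usr/local/jive/var/logs/dbbackup-cron.log
else
    echo "$(date): Jive database backup FAILED" >> /usr/local/jive/var/logs/dbbackup-cron.log
fi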
Performing Database Maintenance
Any time you restart or shut down your Jive database, you must restart the Jive web application nodes as follows:
1. Take down all the web application nodes.
2. Restart or shut down the database.
3. Bring up all the web application nodes.
Backup and Storage Considerations
Storage Reliability
It is highly recommended that the Jive system home directory (/usr/local/jive) be mounted on redundant external storage (preferably SAN storage via redundant HBAs and SAN fabric). When redundant external storage is not available, the local system volume for /usr/local/jive should be mirrored across multiple physical disks so that the loss of a single disk is not a single point of failure.
The total storage requirements for this directory will vary from installation to installation. As a basic guide for
capacity planning, consider the following:
• Core binaries - The base installation requires 500MB of storage (200MB on disk, with an additional 300MB needed during upgrades of the platform).
• Total system traffic - The system writes all logs to /usr/local/jive/var/logs. While the system will by default rotate log files to reduce disk space consumed, larger installations may wish to retain log files for analysis over time (HTTPD access logs, for example). In a default installation, allocating 5GB for log storage should provide ample room to grow.
• Cache efficiency - For each application, local caches of binary content, including attachments and images, are maintained. The more space available to those caches, the more efficient the system will be at serving binary requests and the smaller the strain on the backing RDBMS. As a capacity guideline, plan on roughly .25 of the planned total binary (BLOB) storage in the RDBMS for efficient caching.
• Search index size - Each node stores local copies of the system search index. As a general rule of thumb, plan for search indexes to be 1x the total database storage consumption (.5 for active indexes, .5 for index rebuilds).
• Local database backups - When using the Jive platform-managed database, the database will regularly be backed up to /usr/local/jive/var/data/backup/full and database checkpoint segments backed up to /usr/local/jive/var/data/backup/wal. When an instance is using this database, approximately 35x the total database size will be required in the /usr/local/jive/var/data/backup location with a default configuration. This number can be lowered by more aggressively removing full backup archives stored in backup/full and by more aggressively removing WAL segments after a full backup has been performed.
Storage Monitoring
As with any system, disk consumption should be regularly monitored and alerts generated when the system
approaches disk capacity. Most disk consumption will occur in three areas:
• Application instance home directory -- By default, the platform manages a single application instance located at /usr/local/jive/applications/sbs with a home directory of sbs/home
• Platform logs -- All platform log files are stored in /usr/local/jive/var/logs
• Platform database -- If the local platform database is used, data files will be stored in /usr/local/jive/var/data/postgres-8.3 and backups in /usr/local/jive/var/data/backup
Important: The pre-packaged PostgreSQL DBMS is for evaluation purposes and should not be used for
production instances.
System Backups
In addition to performing regular backups of reliable storage, you should perform backups of the Jive system home.
The simplest backup solution is to back up the entire contents of /usr/local/jive. A more selective option is to back up only /usr/local/jive/applications and /usr/local/jive/etc. In either case, you should make backups in accordance with standard backup practices.
Before upgrading Jive, you should make a full backup of /usr/local/jive.
When you're using the platform-managed database, it's a good idea to maintain copies of /usr/local/jive/var/data/
backup on a separate storage volume that's immune from corruption that may occur on the /usr/local/jive volume.
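A minimal sketch of the full-home approach using standard tools; the destination path is an example and should point at storage separate from the /usr/local/jive volume:

# Archive the entire Jive system home to external backup storage
tar -czf /backup/jive-home-$(date +%Y%m%d).tar.gz /usr/local/jive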
System Database Credentials
The Jive local system database is intended for use only as the application's main database. Under most circumstances
you shouldn't need to separately connect to it. For those cases when you do, the default connection information is
listed below.
Important: The pre-packaged PostgreSQL DBMS is for evaluation purposes and should not be used for
production instances.
• Connection URL: jdbc:postgresql://localhost:5432/sbs
• User name: sbs
• Password: Passwords for database accounts are generated during installation and written to hidden files in /usr/local/jive/etc/postgres/. For example, you'll find the password for the local system database in /usr/local/jive/etc/postgres/.cs-password
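A sketch of connecting with a psql client using these defaults; it assumes a psql binary is available alongside the bundled postgres binary under /usr/local/jive/postgres/bin and reads the generated password from the hidden file:

# Connect to the local system database as the sbs user
PGPASSWORD=$(cat /usr/local/jive/etc/postgres/.cs-password) \
    /usr/local/jive/postgres/bin/psql -h 127.0.0.1 -p 5432 -U sbs sbs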
Using an External Load Balancer
In order to integrate the Jive platform with external load balancers, configure the load balancer for cookie-based
session affinity between each host running the platform. (All of the testing performed by Jive Software of load
balancers is cookie-based.) If the load balancer is performing SSL session termination (recommended), configure
the load balancer to route traffic to port 80 of each Jive-managed server. If the load balancer is not performing SSL
session termination, configure the load balancer to route traffic to port 443 and each server configured for SSL as
described in Enabling SSL Encryption.
Depending on the load balancer, it may be necessary to add JVM route information to the outgoing JSESSIONID HTTP cookies sent to remote agents. For information about using Apache HTTPD as a load balancer, see Apache's documentation about load balancer stickiness. To understand how to configure the route name (jvmRoute variable) of your node(s) in Jive, see the "Configuring the JVM Route Name of a Node(s)" section of Changing the Configuration of an Existing Instance.
Some load balancers require a "magic" HTML file in the site root to make the node available. If your load balancer
requires this, add the following line to this default configuration file /usr/local/jive/etc/httpd/sites/
default.conf:
ProxyPass /magicfile.html !
To learn more about Apache's ProxyPass and how it works, see their documentation.
Enable Application Debugger Support
Applications managed by Jive are capable of accepting remote Java debuggers. To enable debugging, export
environment variables “DEBUG” and “JPDA_TRANSPORT” prior to starting the managed application to be
debugged.
For example, to debug via remote socket connection, start the desired application as shown below.
[0832][jive@melina:~]$ export DEBUG=1 && export JPDA_TRANSPORT=dt_socket &&
appstart sbs
Note that only one managed application may be debugged at a time. When running in DEBUG mode, the application
JVM will halt until a debugger is attached.
Setting Up Document Conversion
Some documents -- including PDFs and those from Microsoft Office -- can be previewed in Jive. If you want to convert content from its native format into a form that can be previewed without altering the original document, you'll need the Document Conversion module, which must be deployed on a server that is separate from your core Jive production instances.
We support converting the following file types on Office 2003 and 2007:
• doc
• ppt
• docx
• pptx
• xls
• xlsx
• pdf
Note: For information about managing conversion attempts and reconverting documents if necessary, see
Managing Document Conversion.
Here is an overview of the steps you'll perform to set up Document Conversion:
1. Set up a production instance of the Jive application (see Installing the Linux Package). When you've finished configuring and customizing it, disable the Document Conversion service with service jive-docconverter stop.
2. Install the application on your conversion node machine. Then disable the services not related to document conversion. For more information, see Installing and Configuring on the Conversion Machine.
3. On the application node, configure the application to communicate with the conversion machine(s).
4. If you want to set up secure communication to the conversion machine, see Setting Up SSL for Document
Conversion.
Configuration Support for External Client Access
Jive optionally supports access to the community from several kinds of clients. This topic describes how to ensure
that these connections are secure.
The following clients might need secure access setup:
• Mobile devices. Jive supports browser-based access using a range of mobile devices. See the Jive Mobile documentation for more information.
• Bridged instances. Using a bridge, you can connect two Jive communities together. Through this bridge, people in one community (who are members of the bridged communities) can see activity from the bridged communities. It's possible that the bridge might cross network boundaries.
• An add-in within Microsoft Office. Jive offers the Jive Desktop Office add-in through which people can upload and synchronize Office documents while working within the Office application on their desktop.
Features designed to help you ensure secure access include:
• URL conventions that you can use to filter requests. See below for more on these.
• Admin console support for turning REST web services on or off. For more, see Setting Access for Web Service Clients.
URL Conventions
Each of the client types listed above requires access to the community in order to exchange data about content,
people, and activity. Each communicates with the community using Representational State Transfer (REST) web
services. To help you secure that access, Jive uses a URL convention that you can use to filter requests so that only
those relevant to the services you've supported are allowed. Each client uses a different base URL to make requests.
Each REST URL for the clients listed begins with __services and is followed by a convention specific to the
client type, i.e., /mobile, /bridging, and /office. Using this convention you can filter access to permit
valid requests. For example, imagine that you want to allow a bridge from a public community outside your firewall
to a private community that's inside it. You could create a filter that permits URLs of the form /__services/
bridging/**. Depending on your network topology and conventions, you could permit these URLs through your
firewall, or you could set up a reverse proxy that would forward requests made to these URLs.
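As a sketch of this kind of filtering at the Apache layer, the following example (using the bundled HTTPD's 2.2-style syntax) would block all /__services requests except the bridging URLs described above; treat it as an illustration to adapt to your own topology and proxy setup.

# In /usr/local/jive/etc/httpd/sites/default.conf (illustrative)
<Location /__services>
    Order Deny,Allow
    Deny from all
</Location>
# The more specific section below overrides the broader one for bridging requests
<Location /__services/bridging>
    Order Deny,Allow
    Allow from all
</Location>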
The following table lists base URLs for each client type:
Client: iPhone
Base URL: /__services/mobile/v1/
Notes: Services can be enabled or disabled in the admin console. Only applies to Jive versions 4.5.5 and earlier.

Client: Bridged instance
Base URL: /__services/bridging/
Notes: Services can be enabled or disabled in the admin console.

Client: Jive Desktop Office add-in
Base URL: /__services/office/
Notes: Services can not be disabled via the console if the feature is installed.
Security for Client Requests
You can enable Secure Sockets Layer (SSL) for each type of client, although how you do so varies among the clients.
The following table describes how SSL is enabled for each client type:
Client: iPhone
SSL Handling: You can force SSL specifically for the iPhone from the admin console as described in Setting Access for Web Service Clients. Note that if you'll be using SSL to secure iPhone access, your certificate must be valid. For example, it must be created by a trusted authority such as Verisign, rather than self-created. Only applies to Jive versions 4.5.5 and earlier.

Client: Bridged instance
SSL Handling: You force SSL for access from bridges when you force it for REST web services in general. See Setting Access for Web Service Clients for more information about that setting.

Client: Jive Desktop Office add-in
SSL Handling: To force SSL for the Jive Desktop add-in, you must force SSL for the entire site, including for browser-based requests. For more information, see Enabling SSL Encryption.
Adding Fonts to Support Office Document Preview
Install your licensed TrueType fonts on the document conversion server to enable accurate previews of uploaded Microsoft Office documents. Having the correct fonts installed will enable the proper display of languages such as Chinese, Japanese, Korean, and Arabic.
Note: If your Jive community is hosted by Jive, this custom font feature is not supported.
1. Locate the font package(s) you want to install.
2. Connect to the document conversion server as root.
3. Using the operating system's package manager, install the fonts on the document conversion server.
4. The font(s) should now have been added to fontconfig on your system. You can verify that a particular font is installed and ready to be used by the document conversion service by typing fc-list and making sure the font is listed.
5. As root, restart the document conversion service (/etc/init.d/jive-docconverter restart).
Monitoring Your Jive Environment
Set up your monitoring systems so that you're alerted before things go wrong.
Jive Software strongly recommends that system administrators set up monitoring systems for Jive platforms that are
deployed on-premise. (Monitoring for hosted customers is performed automatically by Jive Software).
Monitoring the health of the nodes in your Jive deployment and setting up system alerts can help you avoid costly
community downtime, and can also be helpful for correctly sizing the hardware of your deployment. To understand
how to properly size your community, be sure to read Deployment Sizing and Capacity Planning.
In addition, usage trends may help you better diagnose and anticipate issues. Be sure to read Tracking Usage with
Analytics for more information.
Basic Monitoring Recommendations
Here are some monitoring recommendations that are relatively easy to implement.
Consider monitoring the following items using a monitoring tool such as check_MK, Zenoss, Zyrion, IBM/Tivoli, or
other monitoring tool(s). Polling intervals should be every five minutes.
Caution: If you are connecting Jive to other resources such as an LDAP server, SSO system, SharePoint,
and/or Netapp storage, we strongly recommend setting up monitoring on these external/shared resources.
Most importantly, if you have configured Jive to synchronize against an LDAP server, or if you have
configured Jive to authenticate against an SSO, we strongly recommend that you configure monitoring and
alerting on that external resource so that you can properly troubleshoot login issues. At Jive Software, we see
outages related to the LDAP server not being available in our hosted customer environments.
Node: On all nodes
What you should monitor:
• Memory utilization
• CPU load
• Disk space
• Disk I/O activity
• Network traffic
• Clock accuracy
Why you should monitor it: These checks help you monitor all the basics and should be useful for troubleshooting. We recommend performing each of the following checks every five minutes on each server.
• Memory utilization: If your memory utilization is consistently near 75%, consider increasing the memory.
• CPU load: On healthy web application nodes, we typically see CPU load between 0 and 10 (with 10 being high). In your environment, if the CPU load is consistently above 5, you may want to get some thread dumps using the appsnap command, and then open a support case on the Jive Community.
• Disk space: On the web application nodes, you'll need enough disk space for search indexes (which can grow large over time) and for attachment/image/binary content caching. The default limit for the binstore cache is 512MB (configurable from Admin console: System > Settings > Storage Provider). We recommend starting with 512MB for the binstore cache. Note that you also need space for generated static resources.
• Network traffic: While you may not need a specific alert for this, monitoring this is helpful for collecting datapoints. This monitor can be helpful for understanding when traffic dropped off.
• Clock accuracy: In clustered deployments, ensuring the clocks are accurate between web application nodes is critical. We strongly recommend using NTP to keep all of the server clocks in sync.

Node: Jive web application(s)
What you should monitor: We recommend running a synthetic health check against your Jive application (using a tool such as WebInject):
• Individual web application server
• Through the load balancer's virtual IP address
Why you should monitor it: WebInject interacts with the web application to verify basic functionality. It provides functional tests beyond just connecting to a listening port. Checking individual servers, as well as the load balancer instance, verifies proper load balancer behavior. We recommend setting these checks every five minutes initially. To minimize false alarms, we require two failures before an alert is sent. If you find that these settings are resulting in too many false alarms, then adjust your settings as needed.
We recommend setting up WebInject tests that perform the following:
• request the Admin Console login page (this verifies that Apache and Tomcat are running)
• log in to the Admin Console (this verifies that the web application node can communicate with the database server)
• request the front-end homepage (this verifies at a high level that everything is okay)
For an example of WebInject XML code that will perform all of the above, see WebInject Code Example.

Node: Cache server
What you should monitor:
• Java Management Extensions (JMX) hooks (heap)
• Disk space (logs)
Why you should monitor it: JMX provides a means of checking the Java Virtual Machine's heap size for excessive garbage collection. Disk space checks ensure continued logging.
• Heap: If your heap is consistently near 75%, consider increasing the heap size. To learn how, be sure to read Adjusting the Java Virtual Machine (JVM) Settings on a Cache Server.

Node: Databases (Activity Engine, Analytics, and web application)
What you should monitor: Stats for:
• Connections
• Transactions
• Longest query time and slow queries
• Verify ETLs are running
• Disk space
• Disk I/O activity
Why you should monitor it: Database checks will show potential problems in the web application server which can consume resources at the database layer (such as excessive open connections to the database).
• Connections: More connections require more memory. If you're constantly seeing the number of connections spike, consider adding more memory to the database server and make sure that the database server has enough memory to handle the database connections. The number of connections will be a function of what the min/max settings are on each of the web application nodes. (To learn how to set those, see Getting Basic System Information.) Out-of-the-box settings for database connections are 25 minimum, 50 maximum. For high-traffic sites in our hosted environment, we set that to 25/125. Note that additional nodes should be used instead of more database connections for managing additional traffic.
• Transactions: If the database provides an easy way to measure this number, it can be helpful for understanding overall traffic volume. However, this metric is less important than monitoring the CPU/memory/IO utilization for capacity planning and alerting.
• Longest query time and slow queries: It's helpful to monitor slow query logs for the database server that they're provisioned against. In our hosted (PostgreSQL) deployments, we log all slow queries (queries that take more than 1000 ms) to a file and then monitor those to help find any queries that might be causing issues that could be helped by database indexes.
• Verify ETLs are running: This is important only for the Analytics database. The easiest way to monitor this is by querying the jivedw_etl_job table with something like this: select state, start_ts, end_ts from jivedw_etl_job where etl_job_id = (select max(etl_job_id) from jivedw_etl_job); If the state is 1, the ETL is running. If any state is 3, there is a hard failure that you need to investigate. If the difference between start_ts and end_ts is too big, you may need to increase the resources for the Analytics database. (A psql sketch of this query follows these recommendations.)
• Disk space: On the web application nodes, you'll need enough disk space for search indexes (which can grow large over time) and for attachment/image/binary content caching. The default limit for the binstore cache is 512MB (configurable from Admin console: System > Settings > Storage Provider). We recommend starting with 512MB for the binstore cache. Note that you also need space for generated static resources. The most critical place to monitor disk space is on the database server; you should never have less than 50% of your disk available. We recommend setting an alert if you reach more than 50% disk utilization on the database server.
• Disk I/O activity: This is good to record because it can be important if you see slow performance on the web application node(s) and excessive wait time.

Node: Document conversion
What you should monitor:
• Tomcat I/O
• Heap
• Queue statistics (e.g., average length and wait times)
• Running OpenOffice service statistics
• Overall conversion success rate for each conversion step
Why you should monitor it: The various service statistics are exposed via JMX MBeans and can be accessed the same way as JMX on the web application node's Tomcat Java Virtual Machine.

Node: Activity Engine
What you should monitor:
• Activity Engine service
• Java Management Extensions (JMX) hooks (heap) and ports
• Queue statistics (e.g., average length and wait times)
Why you should monitor it: JMX provides a means of checking the Java Virtual Machine's heap size for excessive garbage collection. Disk space checks ensure continued logging.
• Heap: If your heap is consistently near 75%, consider increasing the heap size. To learn how, be sure to read Adjusting the Java Virtual Machine (JVM) Settings.
• To understand more about the queue depths for the Activity Engine, see Configuring the Activity Engine.
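For the ETL check above, a cron-friendly sketch using the PostgreSQL psql client follows; the host, user, and database names for the Analytics database are placeholders for your own environment:

# Report the state of the most recent ETL job in the Analytics database
psql -h analytics-db.example.com -U analytics_user -d analytics \
  -c "select state, start_ts, end_ts from jivedw_etl_job where etl_job_id = (select max(etl_job_id) from jivedw_etl_job);"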
WebInject Code Example
Here is an example of XML code for WebInject that will perform several basic checks on a web application node.
Note: To learn more about monitoring, be sure to read: Monitoring Your Jive Environment.
This script is designed to perform the following checks on a web application node:
• request the admin console login page (this verifies that Apache and Tomcat are running) (case id="1")
• log in to the admin console (this verifies that the web application node can communicate with the database server) (case id="2")
• request the front-end homepage (this verifies at a high level that everything is okay) (case id="3")
• request the index page (case id="4")
In addition, consider monitoring the time it takes this check to run and set an alert threshold at N seconds to ensure
this check succeeds in a timely manner.
<testcases repeat="1">
<testvar varname="BASEURL">http://my-jive-instance.my-domain.com:80</testvar>
<testvar varname="LOGIN">admin</testvar>
<testvar varname="PASSWORD">admin-password</testvar>
<case
id="1"
description1="Hit main page"
description2="Verify 'SBS' exists on page"
method="get"
url="${BASEURL}/admin/login.jsp?url=main.jsp"
verifypositive="SBS"
/>
<case
id="2"
description1="Log in as admin user"
description2="Follow redirect"
method="post"
url="${BASEURL}/admin/admin_login"
postbody="url=main.jsp&login=false&username=${LOGIN}&password=${PASSWORD}"
verifyresponsecode="302"
parseresponse="Location:|\n"
/>
<case
id="3"
description1="Get main.jsp"
description2="Check for 'System'"
method="get"
url="{PARSEDRESULT}"
verifypositive="System"
/>
<case
id="4"
description1="Get index.jspa"
description2="Check for 'Welcome'"
method="get"
url="${BASEURL}/index.jspa"
verifypositive="Welcome|Location: ${BASEURL}/wizard-step\!input.jspa|Location: .*/terms-and-conditions\!input.jspa"
/>
</testcases>
Advanced Monitoring Recommendations
These advanced monitoring recommendations require intermediate experience with monitoring systems.
Consider monitoring the following items using a monitoring tool such as check_MK, Zenoss, Zyrion, IBM/Tivoli, or
other monitoring tool(s). Polling intervals should be every five minutes.
Caution: If you are connecting Jive to other resources such as an LDAP server, SSO system, SharePoint,
and/or Netapp storage, we strongly recommend setting up monitoring on these external/shared resources.
Most importantly, if you have configured Jive to synchronize against an LDAP server, or if you have
configured Jive to authenticate against an SSO, we strongly recommend that you configure monitoring and
alerting on that external resource so that you can properly troubleshoot login issues. At Jive Software, we see
outages related to the LDAP server not being available in our hosted customer environments.
JMX Data Points
Node: Jive web application(s)
• JVM Heap Memory: JMX object java.lang:type=Memory, attribute HeapMemoryUsage, data point max
• JVM Heap Memory: JMX object java.lang:type=Memory, attribute HeapMemoryUsage, data point used
• Voldemort Cache Average Operation Time: JMX object voldemort.store.stats.aggregate:type=aggregate-perf, attribute averageOperationTimeInMs, data point milliseconds

Node: Cache server
• Voldemort Cache Average Operation Time: JMX object voldemort.store.stats.aggregate:type=aggregate-perf, attribute averageOperationTimeInMs, data point milliseconds
• JVM Heap Memory: JMX object java.lang:type=Memory, attribute HeapMemoryUsage, data point max
• JVM Heap Memory: JMX object java.lang:type=Memory, attribute HeapMemoryUsage, data point used

Node: Activity Engine
• JVM Heap Memory: JMX object java.lang:type=Memory, attribute HeapMemoryUsage, data point max
• JVM Heap Memory: JMX object java.lang:type=Memory, attribute HeapMemoryUsage, data point used
PostgreSQL Data Points
At Jive Software, we collect the PostgreSQL data points for the core application database and the Activity Engine database. You may choose to also collect these data points for the Analytics database; we do not do this at Jive Software.

Query Method: poll_postgres.py script. This script makes one query to the database; the query returns all of the following data points at once.
• Connections: Total, Active, Idle
• Locks: Total, Granted, Waiting, Exclusive, Access Exclusive
• Latencies: Connection latency, SELECT Query latency
• Tuple Rates: Returned, Fetched, Inserted, Updated, Deleted
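The poll_postgres.py script referenced above is internal to Jive Software; if you are building similar checks yourself, standard PostgreSQL statistics views expose the same kind of data. A sketch of a connection-count query, assuming a monitoring account with access to the sbs database:

# Count connections per database from the pg_stat_activity view
psql -h 127.0.0.1 -p 5432 -U sbs -d sbs -c "select datname, count(*) from pg_stat_activity group by datname;"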
Fine-Tuning Performance
Through adjustments to caches, JVM settings, and more, you can make sure that the application is performing well.
It's almost certain that you'll want to adjust application settings from their defaults shortly after you're up and
running. In particular, you'll want to keep an eye on caching, but there are other things you can do to ensure that the
application is performing as well as possible. See the following for tuning suggestions.
Client-Side Resource Caching
The platform HTTPD server is pre-configured for optimal caching of static production content. Default configuration values for content caching can be found on a Jive-managed server at /usr/local/jive/etc/httpd/conf.d/cache.conf. You can edit this file to change the default cache time or headers for specific scenarios (changing the length of time static images are cached, for example). Changes you make to this file are preserved across upgrades; the updated defaults shipped with an upgrade are written to a file named "cache.conf.rpmnew" instead. If you have changed cache.conf, be sure to check cache.conf.rpmnew for new enhancements when upgrading.
Note: Certain resources in plugins and themes are cached for 28 days by default. These include the following
file types: .js, .css, .gif, .jpeg, .jpg, and .png. This means that clients won't see updated versions of those files
until their cache expires or is cleared. Of course, changing the resource's file name will also cause it to be
downloaded because it isn't yet cached.
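As an illustration of the kind of directive such a configuration typically contains (the actual contents of cache.conf may differ), Apache's mod_expires can set how long clients cache static images:

# Example only: cache PNG images on the client for 28 days
<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/png "access plus 28 days"
</IfModule>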
Server-Side Page Caching
You can adjust server-side page caching for anonymous users when their having the very freshest content is less
of a concern. With server-side caching on, the server caches pages that are assembled dynamically from data and
resources. Retrieving a page from the cache can save the time needed to assemble a fresh page. However, if the data
that makes up the page has changed, the page in the cache won't be as fresh as a new one would be.
With server-side page caching disabled (and for registered users, whether or not caching is enabled), Jive sends its
default HTTP headers. With page caching enabled, in addition to maintaining the server-side page cache in memory,
the application also sets the Cache-Control: max-age=3600 header on the response.
The value set for max-age is configurable as described below.
You can set these with system properties in the admin console.
Property: jive.pageCache.enabled
Description: Enables server-side page caching.
Values: false (default) to disable page caching; true to enable it. When enabled, only anonymous or guest users will receive cached content.

Property: jive.pageCache.maxage.seconds
Description: Sets the age after which the server will create a fresh page rather than retrieve the page from the cache.
Values: Defaults to 60 seconds. This sets Cache-Control: max-age=60 in the HTTP headers for the page.

Property: jive.pageCache.expiration.seconds
Description: Sets the number of seconds after which a page will be removed from the cache.
Values: Defaults to 30 seconds.

Property: jive.pageCache.maxEntries
Description: Sets the maximum number of pages that can be maintained in the cache. Note that increasing this value might require that you provide more system resources for the application.
Values: Defaults to 1000 entries.
Note that turning on developer mode by setting the jive.devMode property to true will disable the maxAgeFilter
setting (effectively setting jive.maxAgeFilter.enable to false). The jive.devMode property is intended for situations
when you're developing themes or plugins. In those situations, caching can hinder you from seeing the results of your
development work.
Fastpath: Admin Console: System > Management > System Properties
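You can confirm what anonymous clients actually receive by inspecting the response headers for a page request.
This is only an illustrative check; the community URL is a placeholder:

# Hypothetical check of the Cache-Control header returned for an anonymous request.
curl -sI https://community.example.com/ | grep -i '^cache-control'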
Configuring External Static Resource Caching
If you're using a lightweight content delivery network (CDN), you can configure the community to tell clients to
retrieve static resources from your CDN server. This improves performance by reducing load on the Jive server. You
can make this setting in the admin console.
Fastpath: Admin Console: System > Settings > Resource Caching
This feature assumes that you've set up and configured your CDN software to retrieve static resources from the
application server when necessary. Here are the basic steps:
1. Set up your CDN, configuring it to be aware of your Jive server.
2. Configure the resource caching feature with the CDN base URL where static resources can be found when
requested.
3. At run time, when building pages for a client, Jive will rewrite static resource locations so that their URLs point to
your CDN server.
4. When completing the page request, the client will use the CDN URL to retrieve static resources.
5. If the CDN server has the resource, it will return it; if not, it will retrieve the resource from the Jive server, return
it to the client, and cache it for future requests.
To configure the feature, go to Admin Console: System > Settings > Resource Caching and select the Enable
external caching... check box. Enter the CDN URL where static resources can be retrieved by clients.
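Once the feature is enabled, you can spot-check a rendered page to confirm that static resource URLs have been
rewritten to point at the CDN. A rough sketch; the community and CDN hostnames are placeholders:

# Hypothetical check: extract static resource URLs from a page and confirm they
# reference the CDN base URL rather than the Jive server.
curl -s https://community.example.com/ | grep -oE 'https://cdn\.example\.com/[^"]+' | sort -u | head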
Adjusting the Java Virtual Machine (JVM) Settings
As with any Java-based web application, you can sometimes improve performance by assigning particular values
to the Java Virtual Machine options. You can edit the JVM minimum and maximum memory settings on a node,
as well as the JVM "PermGen" setting by editing the values for JVM_HEAP_MAX, JVM_HEAP_MIN, and
JVM_PERM_GEN variables. These values are expressed in MB. For example, to set the minimum and maximum
heap available on the node to 4GB, add or edit the following lines to the file where the JVM properties are stored:
export JVM_HEAP_MAX=4096
export JVM_HEAP_MIN=4096
The following table lists the default JVM values for each of the nodes. Note that your particular community may
need to decrease or increase these values depending on the size and traffic of your community. For sizing capacity
recommendations, be sure to read Deployment Sizing and Capacity Planning.
JVM Defaults and Recommendations

Node: Jive Web Application(s)
File location of stored values on the node: /usr/local/jive/applications/<instance-name>/bin/instance
Default JVM values: JVM_HEAP_MIN=<value in MB>; JVM_HEAP_MAX=<value in MB>; JVM_PERM_GEN=<value in MB>
Additional notes and recommendations: To ensure that the appropriate resources are available to the running application, we recommend setting JVM_HEAP_MIN and JVM_HEAP_MAX to the same value on the web application node(s). In a clustered environment, these min and max values should be the same for all of the web application nodes. For larger communities, that is, communities that get more than 100,000 page views per day or that contain a large amount of content (more than 100,000 messages, documents, or blog posts), you may need to increase the JVM heap min and max settings to be both 4096 or both 6144. The JVM_PERM_GEN should remain unchanged from the default value.

Node: Additional Cluster Nodes (if your configuration includes these optional nodes)
File location of stored values on the node: /usr/local/jive/applications/<instance-name>/bin/instance
Default JVM values: JVM_HEAP_MIN=<should match the web app node's values>; JVM_HEAP_MAX=<should match the web app node's values>; JVM_PERM_GEN=<should match the web app node's values>
Additional notes and recommendations: These values should match those of the primary web app nodes.

Node: Activity Engine
File location of stored values on the node: /usr/local/jive/services/eae-service/bin/instance
Default JVM values: JVM_HEAP_MIN=<value in MB>; JVM_HEAP_MAX=<value in MB>
Additional notes and recommendations: By default, the Activity Engine does not use a JVM_PERM_GEN value.

Node: Cache Server(s) (if your configuration includes these optional nodes)
File location of stored values on the node: /usr/local/jive/etc/conf/cache.conf
Default JVM values: By default, the cache server(s) do not use a JVM_PERM_GEN value.
Additional notes and recommendations: To change the JVM heap settings on the cache server(s), see Adjusting the Java Virtual Machine (JVM) Settings on a Cache Server.

Node: Document Conversion (if you have this optional module)
File location of stored values on the node: /usr/local/jive/services/docconverter/bin/instance
Default JVM values: JVM_HEAP_MIN=<value in MB>; JVM_HEAP_MAX=<value in MB>; JVM_PERM_GEN=<value in MB>
Additional notes and recommendations: We recommend not changing the default settings. They have consistently performed well in all pre-release quality, stress, and performance tests.
Adjusting the Java Virtual Machine (JVM) Settings on a Cache Server
To adjust the JVM settings on the cache server(s), use the correct syntax for your version.
For 5.0.0:
JVM_HEAP=<value in MB>

For 5.0.1, 5.0.2, 5.0.3:
JVM_HEAP=<value in MB>
JIVE_MAX_HEAP=WORKAROUND

For 5.0.4 and higher, if you are upgrading in place from 4.5.x:
JVM_HEAP_MAX=<value in MB>

For 5.0.4 and higher, if you are not upgrading in place from 4.5.x:
JVM_HEAP=<value in MB>
Performance Tuning Tips
Here are a few ways to get the application running as efficiently as possible.
• Consider setting the default for threaded discussions to "flat." People will still be able to set the thread mode of their own views to "threaded," but setting the default will ensure the "flat" mode for new users.
• If you're using a load balancer, make sure it's configured for session affinity/sticky sessions.
• On Oracle 11g, use the prepared statement cache to reduce database overhead.
• When using Oracle as an RDBMS, use the OCI driver rather than the "thin" JDBC Type 4 driver. Using the OCI driver requires the installation of Oracle native binaries compatible with the operating system hosting the Jive installation.
Search Index Rebuilding
In rare cases, particularly after a version upgrade and depending on your configuration, you may experience very long
search index rebuild times. In this case, you may wish to adjust the search index rebuild system properties to increase
the limit on the amount of resources used, potentially improving performance.
Fastpath: Admin Console: System > Management > System Properties
Search index performance can vary greatly depending on the size and number of binary documents and attachments,
as well as user activity, in your community. By default, the search index parameters are set to use as few memory
and CPU resources as possible during a rebuild. If you experience extremely long search index rebuild times (for
example, because your community has created a large amount of content) and you have additional CPU and memory
resources to spare, try adjusting the search index rebuild system properties listed in the following table.
Be aware that increasing the number of writer threads will stress the Solr search node, and increasing the number
of procsPerType (with appropriate increases in the number of gatherer threads) will add more stress to the web
application node(s). However, all of these parameters should generate an appropriate amount of back-pressure on one
another; so, if there's no user traffic, you are very unlikely to crash the web application node(s) by setting the search
rebuild parameters too high.
System Property: search.rebuild.threads
Default Setting: 1
Description: This is the maximum number of concurrently executing writes against the search index.
Recommendation: Try increasing this value until the Solr node is I/O-bound.

System Property: search.rebuild.gather.threads
Default Setting: 5
Description: This is the maximum number of concurrently executing gather operations. A gather operation is the step that loads the Jive object from the database and populates an IndexInfo object that is prepared to be written out to the search index.
Recommendation: Try increasing the value to 2 times the number of CPU cores on your web application node(s).

System Property: search.rebuild.gather.procsPerType
Default Setting: 1
Description: This is the number of gather operations to create per content type. There is a strong connection between this property and the number of gathering threads (search.rebuild.gather.threads). If you are gathering only 3 content types (for example, discussions, blogs, and documents), increasing search.rebuild.gather.procsPerType to 2 will provide an increase in rebuild speed for only 2 of the 3 content types.
Recommendation: Try setting this value to the quotient of the number of gatherer threads and the number of content types in active use. For example, if your community is primarily forums-based and you therefore have only 1 content type, set procsPerType to the number of gatherer threads. If your community actively uses 3 content types, for example, discussions, documents, and ideas with similar frequency (within one order of magnitude), and has only a handful of polls and videos, set procsPerType to the number of gatherer threads divided by 3 (ignoring the polls and videos content types because they are used infrequently as compared to the active content types).
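As an illustration only (these values are assumptions, not shipped defaults), a community with 8-core web
application nodes and four actively used content types might raise the properties together along these lines:

search.rebuild.gather.threads=16        (2 x 8 CPU cores)
search.rebuild.gather.procsPerType=4    (16 gatherer threads / 4 active content types)
search.rebuild.threads=2                (raise gradually until the Solr node becomes I/O-bound)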
Using a Content Distribution Tool with Jive
Many of Jive Software's customers rely on a third-party content distribution and/or content delivery network (CDN)
tool to help their Jive pages load faster for globally-dispersed users. In this section, we describe some best practices
for using Jive with these tools.
Note: The application can be configured to work with most CDN tools. While there are a number of
hardware appliances that customers use inside their firewall, Jive has found that the majority of on-premise
customers choose to deploy behind devices sold by F5.
Recommended Settings for F5
In most cases, your Jive configuration should rely on the default settings in F5. However, there are a few settings that
Jive Software’s hosting engineers commonly customize to optimize hosted Jive deployments.
Generally speaking, Jive Software recommends using the default settings in F5 because F5 is already optimized for
them and customizations you create may require more processing, and thus, more load.
The following tables list the settings that Jive Software’s hosting engineers typically change in F5. These are general
guidelines. Your needs may be different. Contact your Jive Software representative with specific questions.
Table 1: Node Configuration

Setting: ICMP Health Monitor
Description: A simple ICMP request (PING) to the node to confirm it is online and operating at its most basic level.
Table 2: Pool Configuration

Setting: TCP Health Monitor
Description: This is necessary because HTTP does not always show it is down when the Jive application goes into a maintenance mode. At Jive Software, we depend on Web Injections via a separate monitoring service to determine whether a node in a pool is operational or not. Therefore, if a TCP connection fails to the port that is specified by the VIP, the node is considered down and removed from the pool. Note that a node will not be considered down if the Jive application dies but the service is still running. This is why we use Web Injections to do more appropriate application-level uptime validation. For more about monitoring Jive, be sure to read Monitoring Your Jive Environment.

Setting: Load balancing method: Least Connections (node)
Description: This will cause the Jive application to load balance based on the number of connections to the node, regardless of whether the connections are related to the pool traffic. Therefore, load is balanced overall between individual nodes.
Table 3: HTTP VIP Configuration

Setting: OneConnect /32 profile
Description: This profile is used to accommodate the CDN fronting the Jive application access. This setting allows F5 to properly handle multiple HTTP requests within the same TCP connection, as you would see when using a CDN. For more details, refer to F5's documentation.

Setting: HTTP Profile (this applies only if you are using F5 VIPs with SNAT)
Description: This is a customized profile based on the parent HTTP profile to insert the true client source IP using either Request Header Insert or Insert X-Forwarded-For. This is for HTTP logging because F5 acts as a reverse proxy to the Jive web application nodes.

Setting: Set the SNAT Pool to Auto Map
Description: F5 acts as a reverse proxy to the Jive web application nodes; the Jive application needs responses from the web application nodes to flow back through F5. This setting isn't required, but we recommend it as a best practice for configuring the F5 in a one-armed mode.

Setting: Set the default persistence profile to cookie
Description: This will maintain session persistence based on an inserted cookie.

Setting: Keep iRules as simple as possible
Description: At Jive Software, our hosting engineers try to keep iRule use to a minimum because iRules are evaluated each time traffic passes the VIP to which they are attached. Because this adds processing load, we recommend keeping it simple and adding as few iRules as possible.

Setting: Use an iRule or HTTP Class Profile for redirect from HTTP to HTTPS
Description: To keep processing to a minimum, we recommend using the configuration options built into F5 rather than iRules to accomplish HTTP to HTTPS redirects. However, be aware that an HTTP Class Profile issues a 302 (Temporary) redirect, not a 301 (Permanent) redirect; keep in mind that clients and search engines treat a temporary redirect differently from a permanent one. If this is acceptable for you, then you can use an HTTP Class Profile to accomplish your redirect; otherwise, you'll need to use an iRule. Here is an example of each:

• iRule:

  when HTTP_REQUEST {
    HTTP::respond 301 Location "https://[HTTP::host][HTTP::uri]"
  }

• HTTP Class Profile: use the Send To option and select Redirect To. Then, in the Redirect to Location field, enter https://[HTTP::host][HTTP::uri]
Table 4: HTTPS VIP Configuration

Set everything the same as above in HTTP VIP Configuration, except the following:

Setting: Use the default HTTP Profile (this applies only if you are using F5 VIPs with SNAT)
Description: The HTTP profile cannot be used to insert the true client source IP into the header of an HTTPS connection. This must be done by using an iRule for HTTPS traffic. Here is a simple example:

  when HTTP_REQUEST { HTTP::header insert JiveClientIP [IP::remote_addr] }

Setting: Set the Client SSL Profile to cover your SSL certificate, key, and chain
Description: We recommend leaving everything else as the default parent profile of clientssl. You may want to consider removing the renegotiation option from the parent clientssl profile for security reasons. Caution: there is a potential DoS risk here. To learn more about it, be sure to read https://community.qualys.com/blogs/securitylabs/2011/10/31/tls-renegotiation-and-denial-of-service-attacks.
Clustering in Jive
This topic provides an overview of the system that supports clustered installations of Jive.
Note: For version 4.5, the caching and clustering systems were redesigned from the ground up. For more on
how they differ from versions prior to 4.5, be sure to see Clustering and In-Memory Caching: Rationale and
Design for Changing Models.
While they're different services, the clustering and caching systems interoperate. In fact, an application cluster
requires the presence of a separate cache server for caching data for use by all application server nodes in the cluster.
For information on installing the application on a cluster, see Setting Up a Cluster.
Parts of the Clustering System
• Application Servers: In the middle-tier, multiple application servers are set up and the clustering feature is enabled. Caches between the application instances are automatically synchronized. If a particular application server fails, the load-balancer detects this and removes the server from the cluster.
• Senior Member: The senior member is the node that starts first. It will run the background tasks that are run on only one cluster node. On a cluster where document conversion is enabled, the senior member starts a JMS broker. It also grants distributed locks when needed. If the senior member is removed from the cluster, another node will become the senior member. If the former senior member rejoins, it won't automatically become master again (that would disrupt the execution of background tasks and document conversion-related messaging).
• Cache Server: On a separate machine from the application servers is a cache server that is available to all application server nodes in the cluster (in fact, you can't create a cluster without declaring the address of a cache server).
• Database Server: All instances in a cluster share the same database.
• Load Balancer: Between users and the application servers is a load-balancing device. The device may be hardware- or software-based. Every user has a session (represented by a unique cookie value) that allows stateful data to be maintained while they are using the application. Each session is created on a particular application server. The load-balancer must be "session-aware," meaning that it inspects the cookie value and always sends a given user's requests to the same application server during a given session. Without session-aware load balancing, the load-balancer could send requests to any application server in the cluster, scrambling results for a given user.
The following illustration shows the typical cluster configuration. Note that the database server and cache server are
separate nodes, but not part of the cluster.
The existence of a cluster is defined in the database, which stores the TCP endpoint for each node in the cluster. A
node knows it's supposed to be in a cluster because the database it is using shows that clustering is enabled. Nodes
in a cluster use the application database to register their presence and locate other nodes. Here's how that works at
startup:
1. When an application server machine starts up, it checks the database to discover the TCP endpoint (IP address and
port) it should bind to.
2. If the node can't find its TCP endpoint in the database (because this is the first time it has started and tried to join
a cluster, for example), it will look for the first non-loopback local address it can use. It tries to bind to a default
port (7800). If it fails, it will scan up to port 7850 until it finds a port it can bind to. If this fails, the node doesn't
join the cluster.
3. Having established an endpoint for itself, the node notes the other node addresses it found in the database.
4. The node joins the cluster.
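If you need to confirm which clustering port a node actually bound to, you can check the listening sockets on that
node. A hedged sketch; it assumes the net-tools netstat utility is installed, and the port pattern simply matches the
78xx range described above:

# Hypothetical check: show listening TCP sockets in the 78xx range used for clustering.
netstat -tln | awk '$4 ~ /:78[0-9][0-9]$/'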
Clustering Best Practices
Here are a few best practice suggestions for clustered installations.
• Ensure that the number of nodes in your cluster is greater than what you'll need to handle the load you're getting. For example, if you're at capacity with three nodes, then the cluster will fail when one of those nodes goes down. Provision excess capacity so that your deployment can tolerate a node's failure.
• If you have document conversion enabled, and one of the machines is faster than the others, start that one first. As the senior member of the cluster, it will start a JMS broker for which all the other nodes are clients. (Keep in mind that if a new senior member must be elected later, all nodes are pointed at a new broker on the new senior member.)
Clustering FAQ
Do all cluster members need to be on the same local network? Yes. It's better for performance.
Is it possible to have more than one cluster per physical network? Yes, this works without additional
configuration.
How do config files work in a cluster? All configuration data (except bootstrap information such as database
connection information) is stored in the database. Changing configuration settings on one cluster member will
automatically update them on all other cluster members.
Can I set a cluster node's TCP endpoint to a particular value? Yes. If you have an address and port you want to
force a node to bind to, you can do that by setting those values in the admin console. If you do that, the node will try
that address and port only; it won't scan for an address and port if the one you specify fails. For more information, see
Configuring a Cluster Node.
How will I know if a cluster node has failed or can't be found by the cluster? The Cluster page in the admin
console displays a list of nodes in the cluster. See Configuring a Cluster Node for more information.
Managing an Application Cluster
The clustering system is designed to make it easy to add and remove cluster nodes. By virtue of connecting to an
application database that other cluster nodes are using, a node will automatically discover and join the cluster on
startup. You can remove a node using the admin console.
Be sure to see the clustering overview for a high-level view of how clustering works.
Enabling and Disabling a Cluster
You can enable or disable clustering in the admin console. See Configuring a Cluster Node for more information.
Adding a Cluster Node
When you add a new node to the cluster, you must restart every node in the cluster to ensure that the new node is seen
by the others. To avoid competition for senior node status, make sure you wait a minute or more between restarting
each node and ensure each one is properly initialized.
You might also be interested in Setting Up a Cluster, which describes the process for installing or upgrading the
application on an entire cluster.
Fastpath: Admin Console: System > Settings > Cluster
1. Install the application on the new node as described in Installing the Linux Package.
2. Finish setting up the new node, restart it, and let it get up and running.
By default, the node will scan for the TCP endpoint and register itself in the database. You can also specify a
particular endpoint in the admin console as described in Configuring a Cluster Node.
3. Restart all nodes in the cluster so that the other nodes can become aware of the new node.
Removing a Cluster Node
When you want to be sure that a node's registration is removed from the database, you can remove a node from a
cluster by using the admin console.
Fastpath: Admin Console: System > Settings > Cluster
1. Ensure that the node you want to remove is shut down.
2. In the admin console for any of the nodes in the cluster, on the Cluster page, locate the address of the node you
want to remove.
3. Next to the node's address, select the Remove check box.
4. Click Save to save settings and remove the address from the database.
Settings will be automatically replicated across the cluster.
In-Memory Caching
The in-memory caching system is designed to increase application performance by holding frequently-requested data
in memory, reducing the need for database queries to get that data.
Note: For version 4.5, the caching and clustering systems were redesigned from the ground up. For more on
how they differ from versions prior to 4.5, be sure to see Clustering and In-Memory Caching: Rationale and
Design for Changing Models.
The caching system is optimized for use in a clustered installation, where you set up and configure a separate external
cache server. In a single-machine installation, the application uses a local cache in the application server's process
rather than a cache server.
Note: Your license must support clustering in order for you to use an external cache server.
Parts of the In-Memory Caching System
In a clustered installation, caching system components interoperate with the clustering system to provide fast response
to client requests while also ensuring that cached data is available to all nodes in the cluster.
Note: For more on setting up caching in a clustered installation, see Setting Up a Cache Server.
Application server. The application manages the relationship between user requests, the near cache, the cache server,
and the database.
Near cache. Each application server has its own near cache for the data most recently requested from that cluster
node. The near cache is the first place the application looks, followed by the cache server, then the database.
Cache server. The cache server is installed on a machine separate from application server nodes in the cluster. It's
available to all nodes in the cluster (in fact, you can't create a cluster without declaring the address of a cache server).
Local cache. The local cache exists mainly for single-machine installations, where a cache server might not be
present. Like the near cache, it lives with the application server. The local cache should only be used for single-machine installations or for data that should not be available to other nodes in a cluster. An application server's local
cache does not participate in synchronization across the cluster.
Clustering system. The clustering system reports near cache changes across the application server nodes. As a result,
although data is not fully replicated across nodes, all nodes are aware when the content of their near caches must be
updated from the cache server or the database.
How In-Memory Caching Works
For typical content retrievals, data is returned from the near cache (if the data has been requested recently from the
current application server node), from the cache server (if the data has been recently requested from another node in
the cluster), or from the database (if the data is not in a cache).
Data retrieved from the database is placed into a cache so that subsequent retrievals will be faster.
Here's an example of how changes are handled:
1. Client makes a change, such as an update to a user profile. Their change is made through node A of the cluster,
probably via a load balancer.
2. The node A application server writes the change to the application database.
3. The node A app server puts the newly changed data into its near cache for fast retrieval later.
4. The node A app server writes the newly changed data to the cache server, where it will be found by other nodes in
the cluster.
5. Node A tells the clustering system that the contents of its near cache have changed, passing along a list of the
changed cache items. The clustering system collects change reports and regularly sends them in a batch to other
nodes in the cluster. Near caches on the other nodes drop any entries corresponding to those in the change list.
6. When the node B app server receives a request for the data that was changed, and which it has removed from its
near cache, it looks to the cache server.
7. Node B caches the fresh data in its own near cache.
Cache Server Deployment Design
In a clustered configuration, the cache server should be installed on a machine separate from the clustered application
server nodes. That way, the application server process is not contending for CPU cycles with the cache server process.
It is possible to have the application server run with less memory than in a single-machine deployment design. Also
note that it is best if the cache servers and the application servers are located on the same network switch. This will
help reduce latency between the application servers and the cache servers.
Note: For specifics about hardware configuration, see the System Requirements.
Choosing the Number of Cache Server Machines
A single dedicated cache server with four cores can easily handle the cache requests from up to six application
server nodes running under full load. All cache server processes are monitored by a daemon process which will
automatically restart the cache server if the JVM fails completely.
In a cluster, the application will continue to run even if all cache servers fail. However, performance will degrade
significantly because requests previously handled via the cache will be transferred to the database, increasing its load
significantly.
Adjusting Cache-Related Memory
Adjusting Near Cache Memory
The near cache, which runs on each application server node, starts evicting cached items to free up memory once
the heap reaches 75 percent of the maximum allowed size. When you factor in application overhead and free space
requirements to allow for efficient garbage collection, a 2GB heap means that the typical amount of memory used for
caching will be no greater than about 1GB.
For increased performance (since items cached in the near cache are significantly faster to retrieve than items stored
remotely on the cache server), larger sites should increase the amount of memory allocated to the application server
process. To see whether more memory is needed, you can watch the GC logs (or use a tool such as JConsole or
VisualVM after enabling JMX), noting whether the amount of memory in use never drops below about 70 percent,
even after garbage collection occurs.
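One way to watch this is to add GC logging options to the web application node's Java options. The following is a
sketch for the HotSpot JVMs of this product's era; the JAVA_OPTS variable name and the log path are assumptions:

# Hypothetical GC logging flags for observing heap usage after each collection.
export JAVA_OPTS="$JAVA_OPTS \
  -verbose:gc \
  -XX:+PrintGCDetails \
  -XX:+PrintGCTimeStamps \
  -Xloggc:/usr/local/jive/var/logs/gc.log"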
Adjusting Cache Server Memory
The cache server process acts similarly to the near cache. However, it starts eviction once the heap reaches 80 percent
of the maximum amount. On installations with large amounts of content, the default 1GB allocated to the cache server
process may not be enough and should be increased.
To adjust the amount of memory the cache server process will use, edit the /etc/jive/conf/cache.conf file, uncomment
the following line, and set it to a new value:
#JVM_HEAP='1024'
Make sure to set the min and the max to the same value -- otherwise, evictions may occur prematurely. If you need
additional cache server memory, recommended values are 2048 (2GB) or 4096 (4GB). You'll need to restart the cache
server for this change to take effect. See Managing Cache Servers for more information.
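For example, to give the cache server a 4GB heap, the uncommented line might look like the following (a sketch
only; confirm the exact variable name for your version as described in Adjusting the Java Virtual Machine (JVM)
Settings on a Cache Server):

# /etc/jive/conf/cache.conf -- hypothetical 4GB heap for the cache server process
JVM_HEAP='4096'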
Managing In-Memory Cache Servers
This topic describes how you can manage the cache server nodes in a cluster. This includes starting and stopping
servers, adding and removing nodes, and moving a node.
For information about installing cache servers in a cluster, see Setting Up a Cache Server.
Synchronizing Server Clocks
Cache servers determine the consistency of cached data between cache servers partially based on the timestamp
used when storing and retrieving the data. As a result, all the clocks on all machines (both cache server machines
and app server nodes) must be synchronized. It is common to do this through the use of an NTP daemon on each
server synchronized to a common time source. You'll find a good starting point for understanding NTP at http://
www.ntp.org/ntpfaq/. Note that clock synchronization becomes even more important when running within a
virtualized environment; some additional steps may be required for proper clock synchronization as outlined in the
vendor's documentation.
Also, if you're running in a virtualized environment, you must have VMware tools installed in order to counteract
clock drift.
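To confirm that a node is actually synchronized, you can query the local NTP daemon. A quick check, assuming
ntpd and its ntpq utility are installed on the node:

# List NTP peers and offsets; the peer marked with an asterisk (*) is the
# source the local clock is currently synchronized to.
ntpq -p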
Starting and Stopping Cache Servers
You can start and stop cache servers using the commands described below. Note that all cached data on that machine
will be lost when its cache server is shut down.
Start a cache server using the following command:
On Linux
/etc/init.d/jive-cache start
To stop a cache server, use the following command:
On Linux
/etc/init.d/jive-cache stop
Adding a Cache Server Machine
Adding a cache server to a cluster that has existing cache machines requires additional steps beyond a fresh
installation. In particular, you'll need to shut down the entire cluster (both application and cache servers) before you
add a new cache server.
1. Before you shut down the cluster, add the new cache server machine. In the admin console, go to System >
Settings > Caches. In the Cache Servers field, add the new cache server machine, then save the settings.
2. Shut down every node in the cluster.
3. Install the new cache server as described in Managing Cache Servers.
4. On each of the existing cache machines, edit the file at /etc/jive/conf/cache.conf and adjust the
CACHE_ADDRESSES line to be identical to the list you entered on the new cache machine. You can use the IP
address or domain name, but be consistent with the format you use. For more information, see Setting Up a Cache
Server.
5. Start up all cache servers before starting the application servers.
Removing a Cache Server Machine
Removing a cache server from an existing cluster is very similar to adding one.
1. Before you shut down the cluster, remove the cache server machine from the list. In the admin console, go to
System > Settings > Caches. From the Cache Servers field, remove the cache server machine, then save the
settings.
2. Shut down every node in the cluster.
3. On each of the existing cache machines, edit the file at /etc/jive/conf/cache.conf and adjust the
CACHE_ADDRESSES line to remove the cache server machine.
4. Start up all cache servers before starting the application servers.
Moving a Cache Server to Another Machine
Moving a cache server from an existing cluster is very similar to adding a machine.
1. Before you shut down the cluster, update the list of cache servers. In the admin console, go to System > Settings
> Caches. In the Cache Servers field, change the address for the cache server machine you're going to move, then
save the settings.
2. Shut down every node in the cluster.
3. On each of the existing cache server machines, edit the file at /etc/jive/conf/cache.conf and adjust the
CACHE_ADDRESSES line to update the list so that it reflects the new list of machines, including the one you're
moving.
4. Start up all cache servers before starting the application servers.
Configuring In-Memory Caches
In-memory caching reduces the number of trips the application makes to its database by holding often-requested
data in memory. When you configure cache servers, you give each server the list of all cache server machines. For
example, you might edit the list of cache server machines when you're adding or removing servers.
For information on adding and removing cache servers, see Managing Cache Servers. For information on installing
cache servers, see Setting Up a Cache Server.
The Caches page in the admin console lists the application's caches and provides information on how well they're
being used. This information is for use in troubleshooting if you need to call Jive support.
Fastpath: Admin Console: System > Settings > Caches
Note: In versions prior to 4.5, you might have needed to adjust cache sizes in order to improve performance.
As of version 4.5, cache sizes are adjusted by the application based on JVM heap usage. Also, the short-term
query cache setting has been removed because it is no longer needed in the caching system.
Registering Cache Servers
You register cache server machines by entering their IP or domain name in the Cache Servers box of the Caches
admin console page. If you're running multiple cache server machines, they must all be listed in the Cache Servers
box. The same list must be configured on every node in the cluster.
Caution: If you're setting up more than one cache server machine, you must use three or more. The
CACHE_ADDRESSES value should list them in a comma-separated list. Using only two cache servers is not
supported and can cause data loss.
For more information about adding, removing, and moving cache servers, see Managing Cache Servers.
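On the cache server machines themselves, the same list lives in the cache configuration file. The line below is an
illustration only; the hostnames are placeholders, and the exact quoting may vary by version:

# /etc/jive/conf/cache.conf -- hypothetical three-server list (must be identical on every machine)
CACHE_ADDRESSES=cache01.example.com,cache02.example.com,cache03.example.com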
Getting Cache Performance Information
When requested by the support team, you can provide information about caches using the Cache Performance
Summary table on the Caches page in the admin console. There, you'll find a list of the individual kinds of data cached.
Many represent content, such as blog posts and documents. Others represent data that can be performance-expensive to retrieve from the database.
For each cache, you'll find the following information:
Column Name: Cache Name
Description: You can click the cache name to view advanced statistics about the cache. You might use these statistics when working with the support team to resolve cache-related issues. General information about the advanced statistics is provided below.

Column Name: Objects
Description: Generally speaking, each object in the cache represents a different instance of the item. For example, if the Blog cache has 22 objects in it, it means that 22 of the community's blogs are represented there.

Column Name: Hits / Misses
Description: A cache hit is recorded when a query to the cache for the item actually finds it in the cache; a cache miss is when the item isn't found in the cache and the query must go to the database instead. As you might imagine, a higher ratio of hits to misses is more desirable because it means that requests are finding success in the cache, making performance from the user's perspective better.

Column Name: Effectiveness
Description: The effectiveness number -- a percentage -- is a good single indicator of how well a particular cache is serving your application. When a cache is being cleared often (as might happen if memory constraints are being reached), the ratio of cache hits to misses will be lower.

Column Name: Clear Cache Check Box
Description: When you're asked to clear a cache, select its check box, then click the Clear Selected button at the bottom of the cache list table.
Clustering and In-Memory Caching: Rationale and Design for Changing Models
As of version 4.5, Jive includes a new framework for in-memory caching. The new framework replaces the in-memory cache provided by Oracle Coherence in releases prior to 4.5.0; Coherence is no longer in the application.
The new caching subsystem operates as an external service with simpler administration and increased stability and
flexibility compared to previous releases.
As part of this change, the clustering and caching features were separated into two distinct frameworks. Previously,
Coherence had handled both features. This topic describes changes to both features.
The changes described here involve design and behavior concepts that are described more deeply in In-Memory
Caching Overview.
FAQ: In-Memory Caching and Clustering Changes
This FAQ answers questions about the changes in both design and practical terms.
What happened?
In versions prior to 4.5, the application's in-memory caching and clustering features were based on Oracle Coherence.
As of version 4.5, the two features no longer share an implementation framework; both were reimplemented from
scratch and Coherence was removed. Generally speaking, these changes were made to help ensure that the application
continues to perform well under heavy load.
What was wrong with the previous model?
While the previous model for clustering and caching worked reasonably well for many releases, the application had
begun to outgrow the model in many of its deployments. Here are the main reasons:
• In-process caches compete with the application for memory. Caches by their very nature use a lot of memory. This puts a strain on Java's garbage collector, which has to balance normal application server traffic with cache needs. The risks and issues associated with in-process caching had begun to outweigh its benefits.
• Separately sized caches created unnecessary administrative overhead. Having every cache sized independently created administrative overhead that could be avoided. Administrators had to experiment with cache sizes while keeping overall cache usage within JVM limits. This pattern of balancing cache sizes was tedious and unreliable.
• A cache system distributed among nodes (rather than on a separate machine) can create bottlenecks if a node fails. One node's failure -- due to memory issues, for example -- could create a domino effect that resulted in a cluster-wide failure.
What changed?
See Changes in Clustered Installations below for a detailed list of design characteristics and changes.
Will clustering and caching be changed in previous versions?
There are no plans to change the clustering and caching features in versions prior to 4.5.
How are caching and clustering related now?
The features aren't two aspects of the same framework (as they were prior to version 4.5) -- they're separate now,
but interoperate. Although the parts of the caching system are not aware of their presence in a cluster, the clustering
system is aware of the caches. For example, when changes occur to data in one node's near cache, the clustering
system is responsible for ensuring that the other nodes are aware of the change.
Changes in Single-Machine Installations
• No cache server required; local cache is used. With a single-machine installation, you don't need to set up a separate cache server. Instead, the application will use the built-in local cache.
• Near cache is not used. In a single-machine installation, the local cache is used because there is no need to synchronize changes across nodes.
Changes in Clustered Installations
The items listed here describe aspects of the new caching model in a clustered context. Many of these characteristics
directly differ from the previous model.
General Design Changes
• Near cache changes propagated. In the new model, when a change occurs on one node -- such as a user change that results in a "put" to the near cache -- the change is propagated to near caches on other nodes. The near cache that received the change notifies the cluster management system that a change has occurred. The clustering system sends batched changes to other nodes in the cluster. Once aware that a change to data they care about has occurred, other nodes know to go to the cache server for items in the changed cache rather than using their near cache.
• Near caches synchronized every 1/2 second. The near cache instances are synchronized across the cluster every 1/2 second. In other words, an update from node A might not be seen on node B for up to 0.5 seconds.
• Near cache replaces node affinity for immediacy. In the previous model, cache entries that were often used by a particular node could be copied to that node. In the new model, the cache system relies on a larger near cache for similar functionality.
• Cache operations are transaction-aware. Cache operations inside a transaction will only be executed once the transaction is complete. If the transaction rolls back, any cache updates or deletes are dropped.
• Eventual cluster node consistency replaces atomic cache operations. In the new model, the goal is eventual consistency. In the short-term cache implementation, updates can be delayed by half a second. In the previous model, cache operations were atomic, meaning they were consistent across the cluster.
• Distributed cache with larger near cache. The new model does not support optimistic replication, in which all data is replicated on all nodes. Instead, it supports distributed caching with a remote cache server and a much larger near cache than the previous model.
• Values deserialized to near cache. In the new model, cache values are deserialized when returned from a remote cache server and stored in their deserialized form in the near cache. Local caches involve no object serialization.
Changes That Can Affect Requirements and Installation
• Separate cache server instead of caches distributed among nodes. Perhaps the most important design difference between the previous caching model and the new one is that in the previous model, each app node in a cluster had its own cache and those caches were replicated across nodes. In the new model, application servers on their nodes are clients of the cache server, which is on its own node. This means that the failure of a node is less likely to cause a cluster-wide failure; cache requests go to the cache server rather than to another node in the cluster.
• Cache server installed through the RPM. In the new model, the cache server feature is distributed in the RPM and comes with its own set of scripts. You set up a cache server in a manner similar to setting up an application server. See Managing Cache Servers for more.
• More memory needed. In the new model, overall application memory requirements across cluster nodes are higher because they now include the additional JVMs for the separate cache server. For more information on what's needed for a cache server, see the System Requirements.
Changes That Can Affect Configuration
• Multicasting replaced with manual configuration for clusters. In the previous clustering model, nodes communicated via User Datagram Protocol (UDP). This protocol enabled multicasting, through which potential cluster members could discover a cluster. In the new model, which uses TCP instead of UDP for communication, cache servers must be manually configured to know about the other cache servers. You also configure the application with the addresses of the cache servers as part of setup. For more on setting up servers, see Managing Cache Servers, as well as Managing an Application Cluster.
• Cache sizes are no longer individually set. Cache evictions in the previous model were determined on a per-cache basis. In the new system you do not size individual caches; rather, the server handles evictions automatically based on heap usage.
• Creating a cluster now requires specifying a cache server. In the previous model, caches were spread among all nodes in the cluster. In a clustered configuration with the new model, the caching service now requires a separate cache server (it's not possible to configure the application on a cluster without specifying a cache server).
Changes That Can Affect Performance
• Timeout for long-running cache operations. In the previous model, each application node had a part of the cache and each of those cache parts ran in its application server's JVM. This presented a problem in a situation where one cluster node failed, leaving and joining the cluster repeatedly. When that happened, it created a problem for both the application and Coherence. In the application, remote nodes would have to wait for Coherence to respond (or time out) while trying to connect to the failing node. In the new model, long-running cache operations such as this will time out and return null after 500 ms. In this way an unresponsive cache node won't cause the application to hang. A cluster node's failure affects only the load balancer and clustering technology; if the failing node is the current cluster master, the application will elect a new master node.
• Memory pressure, rather than time, determines cache item eviction. In the previous model, near caches were much smaller and evicted cache items by timing them out (using short timeout limits). In the new model, the near cache in each application server's process evicts cache items when heap usage goes over 75 percent.
• Multiple cache servers preserve cached data. If you're running more than two cache servers, cached data is now stored on more than one cache server. This means that if a node goes down, cache data won't be lost. (Note that this doesn't mean that the data is automatically replicated to another node to maintain the replication factor of the data.)
Caution: If you're setting up more than one cache server machine, you must use three or more. The
CACHE_ADDRESSES value should list them in a comma-separated list. Using only two cache servers is
not supported and can cause data loss.
Troubleshooting Caching and Clustering
This topic lists caching- or clustering-related problems that can arise, as well as tools and best practices.
Log Files Related to Caching
If a cache server machine name or IP address is invalid, you'll get verbose messages on the command line. You'll also
get the messages in log files.
• cache.log -- Output from the cache processes, showing start flags, restarts, and general errors.
• cache-gc.log -- Output from garbage collection of the cache process.
• cache-service.log -- Output from the cache service watchdog daemon, which restarts the cache service as needed and logs interruptions in service.
• cache.out -- Cache startup messages.
Use the appsupport Tool to Collect Cache Logs and Configuration
The appsupport tool, which gathers system information for communicating with Jive support, also collects caching
information (unless you specify not to). For more information, see appsupport Command.
Configure Address of Node Previously Set with tangosol.coherence.localhost
If, prior to version 4.5, you set the VM property -Dtangosol.coherence.localhost on your application
instance, you'll need to enter the IP address (not the name) of that node in the cluster. To do this, enter the node's IP
address in the Local Cluster Address field on the Cluster admin console page. For more information, see Setting Up a
Cluster.
Cache Server Configuration Issue
When you are configuring a cache server with its address, you may need to use its IP address or its domain name,
depending on the Jive version. For more information, see Setting Up a Cache Server.
Misconfiguration Through Mismatched Cache Address Lists
If you have multiple cache servers, the configuration list of cache addresses for each must be the same. A mismatched
configuration will show up in the cache.log file. For example, if two servers have the same list, but a third one
doesn't, the log will include messages indicating that the third server has one server but not another, or that a key is
expected to be on one server, but is on another instead.
To fix the problem, ensure that each cache server is configured with an identical cache address list. You'll find the lists
in the file at /etc/jive/conf/cache.conf; the CACHE_ADDRESSES line has the value that should be identical. Shut down the
cluster, open /etc/jive/conf/cache.conf on each server, and edit the list of cache addresses so that the list is identical on all servers.
For more information, see Managing In-Memory Cache Servers.
Caution: If you're setting up more than one cache server machine, you must use three or more. The
CACHE_ADDRESSES value should list them in a comma-separated list. Using only two cache servers is not
supported and can cause data loss.
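A quick way to compare the lists is to print the CACHE_ADDRESSES line from each cache server and verify that
the output matches exactly. The following is a hedged sketch; the hostnames are placeholders, and it assumes ssh
access as a user that can read the file:

# Hypothetical comparison of CACHE_ADDRESSES across all cache servers.
for host in cache01.example.com cache02.example.com cache03.example.com; do
  echo -n "$host: "
  ssh "$host" "grep '^CACHE_ADDRESSES' /etc/jive/conf/cache.conf"
done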
Cache Server Banned Under Heavy Load
Under extreme load, an application server node may be so overwhelmed that it may ban a remote cache server for a
small period of time because responses from the cache server are taking too long. If this occurs, you'll see it in the
application log as entries related to the ThresholdFailureDetector.
This is usually a transient failure. However, if this continues, take steps to reduce the load on the application server
to reasonable levels by adding more nodes to the cluster. You might also see this in some situations where a single
under-provisioned cache server (for example, a cache server allocated just a single CPU core) is being overwhelmed
by caching requests. To remedy this, ensure that the cache server has an adequate number of CPU cores. The
minimum is two, but four are recommended for large sites.
Banned Node Can Result in Near Cache Mismatches
While the failure of a node won't typically cause caching to fail across the cluster (cache data lives in a separate cache
server), the banning of an unresponsive node can adversely affect near caches. This will show up as a mismatch
visible in the application user interface.
An unresponsive node will be removed from the cluster to help ensure that it doesn't disrupt the rest of the application
(other nodes will ignore it until it's reinstated). Generally, this situation will resolve itself, with the intermediate
downside of an increase in database access.
If this happens, recent content lists can become mismatched between nodes in the cluster. That's because near cache
changes, which represent the most recent changes, are batched and communicated across the cluster. If the cluster
relationship is broken, communication will fail between the banned node and other nodes.
After First Startup, Node Unable to Leave Then Rejoin Cluster
After the first run of a cluster -- the first time you start up all of the nodes -- nodes that are banned (due to being
unresponsive, for example) might appear not to rejoin the cluster when they become available. That's because when
each node registers itself in the database, it also retrieves the list of other nodes from the database. If one of the earlier
nodes is the cluster coordinator -- responsible for merging a banned cluster node back into the cluster -- it will be
unaware of a problem if the last started node becomes unreachable.
To avoid this problem, after you start every node for the first time, bounce the entire cluster. That way, each will be
able to read node information about all of the others.
For example, imagine you start nodes A, B, and C in succession for the first time. The database contained no entries
for them until you started them. Each enters its address in the database. Node A starts, registering itself. Node B
starts, seeing A in the database. Node C starts, seeing A and B. However, because node C wasn't in the database when
A and B started, they don't know to check on node C -- if it becomes unreachable, they won't know and won't inform
the cluster coordinator. (Note that the coordinator might have changed since startup).
If a node leaves the cluster, the coordinator needs to have the full list at hand to re-merge membership after the node
becomes reachable again.
Application Management Command Reference
Use these commands to perform maintenance tasks on your managed instance. Except where noted, you'll find these
in /usr/local/jive/bin.
Note: Execute these commands as the jive user. For example, if you've got ssh access as root to your host
machine, you can use the following command to switch to the jive user:
sudo su - jive
appadd Command
Jive Application Addition tool (appadd). Adds a new application configuration to the standard locations. Optional
parameters may be overridden for hosting multiple instances on a physical host; however, such configurations are
not recommended.
appadd [options] name
Note: The appadd command does not accept images or upgrade as name arguments because these names are
reserved for upgrade tasks.
Short | Long | Description
 | --version | Show program's version number and exit.
-h | --help | Show help message and exit.
Table 5: HTTPD Options
Configures the HTTPD integration options.

Short | Long | Description
 | --httpd-addr=ADDR | HTTPD listen address [default 0.0.0.0]
 | --vhost | Create a virtual host for HTTPD integration as opposed to proxy directives (default is to create proxy directives only). Default: False
 | --dedicated-httpd-enable | Enable Dedicated HTTPD Server.
 | --dedicated-httpd-port=PORT | Port to use for dedicated httpd server.
Table 6: Application Options
Configures general application server options.

Short | Long | Description
-s PORT | --server-port=PORT | Application server management port [default 9000]
-j OPT | --java-options=OPTS | Additional JRE options to use with the Java runtime.
 | --custom-option=OPTS | Additional application option to use with the Java runtime.
-c PORT | --cluster-port=PORT | Multicast cluster port [default 9003]
-m ADDR | --cluster-addr=ADDR | Multicast cluster address [224.224.224.224]
 | --cluster-jvm-route=ROUTE | Cluster JVM Route setting for Apache-based load balancing.
 | --cluster-local-port-enable | Enable local multicast cluster port [optional].
 | --cluster-local-port=PORT | Local multicast cluster port [default 9005]
 | --cluster-local-addr=ADDR | Local multicast address [optional].
 | --cluster-name=NAME | Local cluster name [optional].
 | --cluster-member=NAME | Local cluster member [optional].
 | --snmp-enable | Enable SNMP monitoring [optional]
 | --snmp-port=PORT | SNMP port [optional] [default 10161]
Table 7: HTTP Options
Configures application server HTTP options.
Short      Long                Description
-z PORT    --http-port=PORT    Application server http port [default 9001]
           --http-addr=ADDR    Application server http listen address [default 127.0.0.1]
Table 8: HTTP Monitor Options
Configures application server HTTP monitor options.
Short     Long                        Description
          --http-monitor-port=PORT    Application server http monitor port [default 9002].
          --http-monitor-addr=ADDR    Application server http monitor listen address [default 127.0.0.1].
Table 9: Application Options
Configures application options.
Short      Long                      Description
-p PATH    --context-path=PATH       Application context [default '/']
           --app-cache-size=BYTES    Static cache size in bytes [default 10240 (10M)]
           --app-cache-ttl=MS        Static cache TTL in ms [default 10000 (10 minutes)]
Table 10: General Options
General configuration not specific to any subsystem. Most should only be used for testing.
Short     Long                  Description
-v        --verbose             Be verbose about what actions are being taken. Default: False
-d        --debug               Show debug information. Default: False
          --source=SOURCE       Use the given application template path and not the default in JIVE_HOME. Default: None
          --destination=DEST    Output the application to the given path and not the default in JIVE_HOME. Default: None
          --overwrite           Overwrite any existing application artifacts. Default: False
          --force               Ignore any warnings and proceed, potentially causing conflicts with other applications. Default: False
          --no-link             Use copies instead of symlinks when creating the application [default is to link]
          --auto-port           Automatically determine ports to use.
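For example, a minimal appadd invocation might look like the following; the application name sampleapp and the port values are placeholders chosen for illustration, not required defaults.

    # Add a managed application named "sampleapp", overriding the default
    # management, HTTP, and HTTP monitor ports.
    appadd --server-port=9100 --http-port=9101 --http-monitor-port=9102 sampleapp

    # Alternatively, let appadd pick free ports automatically.
    appadd --auto-port sampleapp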
appls Command
appls [options]
Lists information about platform-managed applications.
Short     Long           Description
          --version      Show program's version number and exit.
-h        --help         Show help message and exit.
-f        --failed       Show only failed application instances. Default: False
-r        --running      Show only running application instances. Default: False
-s        --stopped      Show only stopped application instances. Default: False
-q        --quiet        Remove unnecessary text for combination with other utilities. Default: False
-b        --brief        Remove even more unnecessary text for combination with other utilities.
-n        --name-only    Show only names meeting filter criteria. Default: False
-v        --verbose      Be verbose about what actions are being taken. Default: False
-d        --debug        Show debug information. Default: False
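Because the -q, -b, and -n options strip extra text, appls output can be combined with other shell utilities. The sketch below restarts every application currently reported as running; the combination of flags and the loop are illustrative, not mandated by the tool.

    # Print only the names of running applications, then restart each one.
    for app in $(appls --running --name-only); do
        apprestart "$app"
    done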
apprestart Command
Stops and restarts a template-configured application by name. If no name is given, all configured applications are restarted.
apprestart [options] name
Short     Long         Description
          --version    Show program's version number and exit.
-v        --verbose    Be verbose about what actions are being taken. Default: False
-d        --debug      Show debug information. Default: False
apprm Command
Jive application removal tool (apprm). Stops and removes a managed application given a valid application name.
apprm [options] name
Short     Long         Description
-h        --help       Show help message and exit.
          --version    Show program's version number and exit.
-s        --no-stop    Do not stop the application before removing. Default: True
-v        --verbose    Be verbose about what actions are being taken. Default: False
-d        --debug      Show debug information. Default: False
appsnap Command
Jive Application Snapshot tool (appsnap). Passively gathers information about a running system and applications.
appsnap [options] [name]
Short     Long         Description
          --version    Show program's version number and exit.
-h        --help       Show help message and exit.
Table 11: Snapshot Options
Defines the interval and count of snapshots taken, system or application.
Short          Long                   Description
               --no-sys               Do not gather top-level system information. Default: True
               --no-eae=EAE           Do not gather appsnap information from the Activity Engine. This option is available in versions 5.0.1.1 and higher.
-c COUNT       --count=COUNT          Sample count to take [default 1]
-i INTERVAL    --interval=INTERVAL    Time between samples [default 1]
               --jstack=PATH          Use the given path to the jstack binaries for sampling applications. Default: None
               --jstack-opts=OPTS     Pass the given options to the jstack binary for sampling detail. Default: None
               --force=FORCE          Force a Java thread dump. This may be fatal to the process, resulting in failure if the dump is successful.
-o OUTPUT      --out=OUTPUT           Append output to the given file, creating it if the file does not exist [default STDOUT]
Table 12: General Options
General configuration not specific to any subsystem. Most should only be used for testing.
Short     Long         Description
-v        --verbose    Be verbose about what actions are being taken. Default: False
-d        --debug      Show debug information. Default: False
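For example, the following sketch takes ten samples of a single application with an interval of 5 between samples and appends the output to a file; the application name and output path are placeholders.

    # Sample "sampleapp" ten times with an interval of 5 between samples,
    # appending the results to a placeholder output file.
    appsnap --count=10 --interval=5 --out=/tmp/sampleapp-snapshot.txt sampleapp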
appstart Command
Jive Application Start tool (appstart). Starts a template-configured application by name. If no name is given, all
configured applications are started.
appstart [options] name
Short     Long         Description
          --version    Show program's version number and exit.
-h        --help       Show help message and exit.
-v        --verbose    Be verbose about what actions are being taken. Default: False
-d        --debug      Show debug information. Default: False
appstop Command
Jive Application Stop tool (appstop). Stops a template-configured application by name. If no name is given, all
configured applications are stopped.
appstop [options] name
Short     Long         Description
          --version    Show program's version number and exit.
-h        --help       Show help message and exit.
-r        --retain     Retain work directory for the application (default false)
-v        --verbose    Be verbose about what actions are being taken. Default: False
-d        --debug      Show debug information. Default: False
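For example, to stop one application while keeping its work directory for later inspection, or to stop everything on the host (the application name below is a placeholder):

    # Stop "sampleapp" but retain its work directory.
    appstop --retain sampleapp

    # Stop all configured applications on this host.
    appstop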
appsupport Command
Jive Application Support tool (appsupport). Passively gathers information about a running system for communicating
with Jive support. A usage sketch follows the option tables below.
appsupport [options]
Short     Long         Description
          --version    Show program's version number and exit.
-h        --help       Show help message and exit.
Table 13: System Options
Fine-tune the system information captured by the support dump. By default, all data is captured.
Short     Long             Description
-m        --no-mem         Do not gather memory information. Default: True
-c        --no-cpu         Do not gather CPU information. Default: True
-u        --no-uptime      Do not gather uptime information. Default: True
-s        --no-os          Do not gather operating system information. Default: True
-z        --no-limits      Do not gather ulimit information. Default: True
-x        --no-sysctl      Do not gather sysctl information. Default: True
-n        --no-network     Do not gather network information. Default: True
-f        --no-firewall    Do not gather firewall information. Default: True
-l        --no-logs        Do not gather system log information. Default: True
Table 14: Jive Options
Fine-tune the Jive-specific information captured by the support dump. By default, all data is captured.
Short       Long                    Description
-e          --no-jive-config        Do not gather Jive-specific configuration information.
-a          --no-jive-apps          Do not gather Jive application data.
-i          --no-httpd-logs         Do not gather Jive HTTPD logs.
-j          --no-jive-logs          Do not gather Jive-specific logs.
-p          --no-db-logs            Do not gather local system database logs.
-t BYTES    --limit-logs=BYTES      Limit log captures to a given number of bytes [default no limit]
-q          --no-docverse-logs      Do not gather logs related to the document conversion feature.
-y          --no-docverse-config    Do not gather configuration related to the document conversion feature.
-g          --no-cache-logs         Do not gather cache-related logs, including cache-gc.log, cache.log, cache-service.log, and cache-service.out.
-w          --no-cache-config       Do not gather cache-related configuration, including cluster.xml, server.properties, and stores.xml.
Table 15: General Options
General configuration not specific to any subsystem. Most should only be used for testing.
Short        Long            Description
-o OUTPUT    --out=OUTPUT    Append output to the given file, creating it if the file does not exist [default STDOUT]
             --no-time       Do not add timestamps to output. Default: True
-v           --verbose       Be verbose about what actions are being taken. Default: False
-d           --debug         Show debug information. Default: False
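For example, the following sketch gathers a support dump while skipping local database logs and capping captured logs at roughly 10 MB; the byte limit and output path are placeholder values.

    # Build a support dump without local database logs, limiting log
    # capture to about 10 MB and appending the result to a file.
    appsupport --no-db-logs --limit-logs=10485760 --out=/tmp/jive-support.txt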
dbbackup Command
Backs up the application database. To create a full backup, omit the options. For more on backups, see Performing
a Jive System Database Backup.
dbbackup [options]
Short                  Description
-s SEGMENT_FILE        Database segment to archive.
-d DATA_PATH           Path to the data source.
-p DESTINATION_PATH    Path to which the archive should be written.
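For example, run as the jive user, a full backup requires no options; the destination path in the second command is a placeholder showing how -p directs where the archive is written.

    # Full backup using the default destination.
    dbbackup

    # Full backup written to a specific location (placeholder path).
    dbbackup -p /backup/jive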
manage Command
Starts, stops, or restarts the application. Also checks the status of the application.
This command operates on the application it is installed with (app_name).
Location: /usr/local/jive/applications/<app_name>/bin
manage [start | stop | restart | status]
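For example, to check and then restart a single application, run the copy of manage installed with that application; the application name is a placeholder.

    # Check the status of the application, then restart it.
    /usr/local/jive/applications/sampleapp/bin/manage status
    /usr/local/jive/applications/sampleapp/bin/manage restart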
upgrade Command
Jive platform upgrade tool. Detects whether an upgrade is in progress and which upgrade tasks need to be executed,
and can optionally execute them, depending on the command options. When invoked with no options, it returns 0 if an
upgrade is in progress and 1 otherwise. A usage sketch follows the option tables below.
upgrade [options]
Short     Long         Description
          --version    Show program's version number and exit.
-h        --help       Show help message and exit.
Table 16: Upgrade Options
Configures upgrade behaviors.
Short      Long                    Description
-x         --dry-run               Show upgrade tasks that would be performed. Default: False
-e         --execute               Perform any pending platform upgrade tasks. Default: False
-r         --reset                 Mark all unperformed upgrade tasks as complete and exit. Default: False
-f PATH    --database-file=PATH    Use alternate upgrade database file. Default: None
Table 17: General Options
General configuration not specific to any subsystem. Most should only be used for testing.
Short     Long         Description
-v        --verbose    Be verbose about what actions are being taken. Default: False
-d        --debug      Show debug information. Default: False
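For example, because the bare command reports upgrade status through its exit code, it can gate other steps in a shell script. The sketch below previews pending upgrade tasks and then executes them only when an upgrade is in progress.

    # Exit code 0 means an upgrade is in progress: preview the pending
    # tasks, then execute them.
    if upgrade; then
        upgrade --dry-run
        upgrade --execute
    fi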