IBM TRIRIGA Version 10.4
How to Integrate Data into TRIRIGA Real Estate Environmental Sustainability Impact Manager
© Copyright International Business Machines Corporation 2014. US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

CONTENTS
List of Figures
Revision History
1 Introduction
2 Overview
2.1 TRIRIGA
2.2 Building Management System or Aggregator
2.3 Data Collection Code
2.4 Csv-to-Fact tables ETL
3 Summary of Toolkit Content
4 Moving Data from BMS to Staging Tables
4.1 TRIRIGA Interface
4.2 Building Management System (BMS)
4.3 Data Collection Code
4.4 Using IBM Tivoli Monitoring
4.5 Csv-to-Fact tables ETL
5 Building a Custom ITM Agent
5.1 Considerations for Agent
5.1.1 Defining Data Sources
5.1.2 Subnodes
5.1.3 Attribute Groups and Attributes
6 Product documentation

LIST OF FIGURES
Figure 1. Overview of write to TRIRIGA database
Figure 2. Overview of write to CSV file

TRIRIGA Real Estate Environmental Sustainability: Integrating data

REVISION HISTORY
Date         Version  Comments
30 Jan 2014  1.0      Initial version.
10 Feb 2014  1.1      Integrated review comments from KA and CR.
18 Mar 2014  1.2      Added ITM agent information in section 5.

1 Introduction
In the IBM® TRIRIGA® Real Estate Environmental Sustainability Impact Manager application, meter and sensor data from buildings can be integrated so that reports and analytics can be run against that data. Before version 10.4, the focus was the integration of monthly data into TRIRIGA Real Estate Environmental Sustainability. New features in TRIRIGA Real Estate Environmental Sustainability Impact Manager v10.4 additionally leverage hourly and daily data from meters and sensors. This data may be aggregated in a building management system (BMS).
Energy data from meters and sensors may also be collected into systems that complement the BMS. Because terminology is not used consistently throughout the industry, and because some readers may be new to energy management terminology, let us first cover some basic terms that are used in this document:

Building management system (BMS): a system, typically a combination of hardware and software, that monitors and controls various aspects of a building, such as heating, ventilation, and air conditioning (HVAC) systems. A BMS may also be referred to as a building automation system (BAS) or a supervisory control and data acquisition (SCADA) system.

Aggregator: a system that aggregates information from meters, sensors, or building management systems while not necessarily having the control functionality of a BMS. Because of these nuances in functionality, such systems are not technically BMSs and are referred to as aggregators in this document.

Meter: a device that measures flow or usage, typically of electricity, water, or fuel. The terms meter and sensor are sometimes used interchangeably. A meter may be built into a larger device, such as an air handling unit (AHU) that measures its own electrical usage; it may be adjacent to a device, in which case it typically measures the power or energy consumed by that device; or it may be connected to an electrical feed serving many devices or even an entire building, in which case it measures the power or energy used by all devices fed by that wire.

Sensor: a device that measures non-electrical quantities. It may measure the flow rate of a gas or liquid, temperature, humidity, fan speed, the percent openness of a valve, whether a space is occupied, whether an air handling unit is in economizer mode, and so on.
A sensor may be built into a larger device, such as an air handling unit, or be adjacent to a device, in which case it typically measures conditions under the control of that device or critical to the operation of that device. Alternatively, it may be a standalone sensor, such as a zone temperature sensor in the vicinity of several air handling units that could affect its value.

Because of the finer granularity of the data, the increased frequency of moving it into TRIRIGA, and the increased amount of data, new data integration techniques are necessary. This document covers some of the techniques that can be used to integrate this data. Of course, TRIRIGA supports various ways to integrate external data.

2 Overview
Let's start with an overview and then drill down into each part of it. This document focuses primarily on two methods of collecting data from BMSs or aggregators. In the figures below, the green boxes represent toolkit content that is provided on Service Management Connect (SMC) to make the integration easier. Dotted lines represent entities, such as code or files, that customers, business partners, or services are responsible for.

The first method looks like this:

[Figure: Toolkit Overview – Method #1, showing data flowing from meters and sensors (water, elevators, fire, HVAC, lighting, security) through a BMS or aggregator, through data collection code, into staging tables, and via sample ETLs into the monthly, daily, hourly, and analytic fact tables that feed reports, scorecards, and analytics.]
Figure 1. Overview of write to TRIRIGA database

The second method looks like this:

[Figure: Toolkit Overview – Method #2, showing the same layers, but with the data collection code writing to a csv file that sample ETLs load into the fact tables.]
Figure 2. Overview of write to CSV file

Both methods are very similar; however, in the first method the data collection code writes into a database, whereas in the second method it writes to a comma separated value (csv) file. In both cases, the data is moved from the database or csv file into TRIRIGA using sample ETLs. For a specific BMS or aggregator these methods are mutually exclusive. Do not collect data from the same BMS or aggregator using both methods, because that will cause duplicate data in your fact tables and lead to erroneous results. If you have multiple BMSs or aggregators, you may use a different method for each.

There are four layers in the figures above:
- TRIRIGA
- Data collection
- Building management system (BMS) or aggregator
- Meters and sensors

Let's take a look at each of these layers.

2.1 TRIRIGA
First, at the top, in red, is TRIRIGA. The diagram shows three tiers within TRIRIGA. Within the various TRIRIGA applications, the top, customer-facing tier consists of the reports, scorecards, and workflows (analytics) that render the data in meaningful and actionable ways. There are no toolkit items in this tier; all of this functionality is provided by the TRIRIGA Real Estate Environmental Sustainability Impact Manager application.
Refer to the product documentation at http://pic.dhe.ibm.com/infocenter/tivihelp/v49r1/topic/com.ibm.tri.doc_10.4.0/product_landing.html for more details.

The next tier contains the fact tables. The product ships with four fact tables: monthly, daily, hourly, and analytics. There are no toolkit items in this tier either; all of this functionality is provided by the TRIRIGA Real Estate Environmental Sustainability Impact Manager application, and the same product documentation applies.

The toolkit content starts one more tier down, with the staging tables. Default staging tables are included in the toolkit, and we recommend using them. As the diagram shows, there are three staging tables, corresponding to the three summarization intervals: monthly, daily, and hourly. The staging tables provide a temporary repository within the TRIRIGA tablespace that facilitates validation and augmentation of the data as it is moved from the staging tables to the fact tables. Sample extract, transform, and load scripts (ETLs) move the data from the staging tables to the fact tables. Refer to the SMC content here TBD for details about the functionality of the staging table-to-fact table ETLs. Note that the staging tables could be bypassed; however, the functionality in the sample staging table-to-fact table ETLs would then need to be provided elsewhere in the process.

2.2 Building Management System or Aggregator
At the bottom of the diagram is the building management system (BMS) or aggregator. Depending on how your environment is set up, data from all of the buildings in your enterprise may be consolidated, or the data may be distributed throughout your enterprise. The specific functionality provided by BMSs and aggregators varies by vendor and version, so check with your vendor for specifics.
In general, most, if not all, BMSs and aggregators have an internal repository, or cache, of data from connected meters and sensors. Additionally, they may have a repository that stores previous (historical) values. The mechanisms for accessing these repositories vary by vendor, but may include one or more of the following: an application programming interface (API), an export tool within the BMS, or support for SQL.

2.3 Data Collection Code
In the middle of the diagram is the code that moves the data from the BMS or aggregator into TRIRIGA. The technique chosen to get the data out of the BMS or aggregator may be an extract, transform, and load (ETL) script, a shell script, tooling like the IBM Tivoli Monitoring Agent Builder, or code in a programming language like Java or C++.

2.4 Csv-to-Fact tables ETL
Figure 2 depicts an alternative data collection method. If you can get your data into a comma separated value (csv) file, then you may use a sample ETL provided in the toolkit to populate the fact tables from that data.

3 Summary of Toolkit Content
Figures 1 and 2 highlight the toolkit contents in green. The toolkit contains the following information and sample code:
- An Object Migration (OM) package that contains sample staging tables
- Sample ETLs (one each for monthly, daily, and hourly) that move data from the staging tables to the fact tables. These samples work with the sample staging tables that are contained in the OM package.
- Sample ETLs that move data from an external data source to the staging tables. These samples work with the sample staging tables contained in the OM package and an IBM Tivoli Monitoring (ITM) Tivoli Data Warehouse (TDW) schema designed for extensibility.
- Sample ETLs for populating the fact tables from a comma separated value (csv) file
- Documentation (this document) about things to consider when you implement data collection
- Documentation (this document) about general concepts around building management systems and aggregators

4 Moving Data from BMS to Staging Tables
For the purpose of this discussion, staging tables refers to the four sample staging tables used by TRIRIGA Real Estate Environmental Sustainability Impact Manager: triAssetEnergyUseMFact, triAssetEnergyUseDFact, triAssetEnergyUseHFact, and triAssetAnalyticHFact. In TRIRIGA Real Estate Environmental Sustainability Impact Manager, you should choose the technique most appropriate for your environment to move your meter and sensor data from the BMS or aggregator repository into the staging tables. Factors that influence this decision include everything from the BMS interfaces available to the skill set you have available. There are powerful ETL tools, such as Tivoli Directory Integrator (TDI), which is provided with TRIRIGA Real Estate Environmental Sustainability Impact Manager. Additionally, there are monitoring tools, such as IBM Tivoli Monitoring, with capabilities that are conducive to this implementation, such as the Agent Builder and the warehouse agents. While there are far too many factors to make a specific recommendation, some considerations are common regardless of the technique that you choose. The following sections use a question and answer format to draw attention to facets that you might not have thought about.

4.1 TRIRIGA Interface
Do you use the staging tables, or bypass them and write directly into the fact tables? The advantage of using the staging tables is that you can use the sample staging table-to-fact table ETLs.
The names of these ETLs can be misleading, because they do much more than just move data from staging tables to fact tables. Refer to the SMC content here TBD for details. If you choose to write directly to the fact tables, make sure that you incorporate the appropriate functionality from these samples into your code.

Note that neither the staging tables nor the fact tables are designed to be warehouses. This applies generally to TRIRIGA as well as to the TRIRIGA Real Estate Environmental Sustainability Impact Manager staging tables and fact tables. These tables should not contain raw data; they should contain only summarized data. They should not contain data for every meter in the enterprise; they should contain only data that is currently being consumed by reports, charts, or analytics, or data that reports, charts, or analytics are planned to use.

What do you need to know in order to insert rows into the TRIRIGA staging tables? You will need to know which columns are required and which are optional, the format of all of the time stamps, the specific values supported for any special strings, and the units of measure (UOM) expected for each numeric field. All of these answers can be found in the staging table documentation here TBD.

How often should data be pushed into TRIRIGA? The sample staging-to-fact ETLs are designed to run every hour. This allows the analytics to detect issues within an hour of when they occur, so that corrective action can be taken quickly for maximum benefit. If you plan on running the analytics less often, for example daily instead of hourly, then that lessens the need to push data into TRIRIGA every hour.
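The earlier question about inserting rows into the staging tables can be illustrated with a minimal sketch. The table name, column names, time stamp format, and units below are hypothetical stand-ins (the real ones are defined in the staging table documentation), and an in-memory SQLite database stands in for the TRIRIGA database:

```python
import sqlite3
from datetime import datetime, timezone

# Hypothetical staging table: the real column names, required fields, time
# stamp format, and units of measure come from the staging table
# documentation, not from this sketch.
DDL = """
CREATE TABLE hourly_staging (
    asset_id     TEXT NOT NULL,   -- which meter or device the row describes
    interval_end TEXT NOT NULL,   -- end of the summarized hour (ISO 8601 assumed)
    energy_kwh   REAL,            -- delta over the hour, normalized to kWh
    avg_temp_c   REAL             -- average over the hour, normalized to Celsius
)
"""

def insert_hourly_row(conn, asset_id, interval_end, energy_kwh, avg_temp_c):
    """Insert one summarized, normalized hourly row into the staging table."""
    conn.execute(
        "INSERT INTO hourly_staging VALUES (?, ?, ?, ?)",
        (asset_id, interval_end.strftime("%Y-%m-%dT%H:%M:%SZ"),
         energy_kwh, avg_temp_c),
    )

conn = sqlite3.connect(":memory:")
conn.execute(DDL)
insert_hourly_row(conn, "AHU1",
                  datetime(2014, 3, 18, 13, 0, tzinfo=timezone.utc), 12.5, 21.3)
row = conn.execute("SELECT * FROM hourly_staging").fetchone()
```

The point of the sketch is only the shape of the work: by insert time the data is already summarized and normalized, and every column maps to a documented staging table field.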
Be aware that this also affects reports and charts in TRIRIGA, and that if the data collection gets out of sync with the rules (if the rules run just before the data is available), it could be the next collection interval before a problem is detected. The longer the interval, the more significant the impact. Generally, it is preferable to push data into TRIRIGA every hour.

4.2 Building Management System (BMS)
When you design and implement your interface to the BMS, you will need a good understanding of the details of your BMS. The design and implementation will depend on many factors. While the specifics will depend on your BMS, you can start with the following general questions:

How often will you want to collect data from the BMS? The answer to this will provide a lot of direction. If the BMS does not store historical data, then you need to collect data often enough that the sample size yields a reasonably accurate average for the hour. Generally, collecting every 15 minutes yields a reasonable average while not placing too much of a burden on the network or the BMS. If the BMS does store historical data, then you can collect data hourly or even less frequently, but you will need to ensure that the BMS populates the historical repository at an appropriate frequency, and you will need to understand how long the data persists in the BMS's historical repository.

If the BMS supports exporting its data, can the export be automated, and how secure is it? If the BMS supports exporting data to a spreadsheet, is it a command that needs to be run manually on the BMS, or can you schedule it or invoke it from an external script (shell script or batch/command file)? If it supports invocation from an external script, how is security handled (does it require authentication before the command can be run)? Where will you store the resultant spreadsheet, and how will it be secured?
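The 15-minute cadence discussed above means each hourly summary is built from roughly four raw samples. A minimal sketch of grouping raw samples into hourly buckets before summarization (the data shapes are illustrative, not a TRIRIGA format):

```python
from collections import defaultdict
from datetime import datetime

def bucket_by_hour(samples):
    """Group (timestamp, value) pairs into hourly buckets. With a 15-minute
    collection cadence, each bucket holds about four samples to average."""
    buckets = defaultdict(list)
    for ts, value in samples:
        # Truncate the timestamp to the top of the hour to form the bucket key.
        buckets[ts.replace(minute=0, second=0, microsecond=0)].append(value)
    return dict(buckets)

# Four 15-minute samples within the 13:00 hour:
samples = [
    (datetime(2014, 3, 18, 13, 0), 21.0),
    (datetime(2014, 3, 18, 13, 15), 21.4),
    (datetime(2014, 3, 18, 13, 30), 21.6),
    (datetime(2014, 3, 18, 13, 45), 21.8),
]
hourly = bucket_by_hour(samples)
```

Fewer samples per bucket (a missed collection, for example) simply makes the resulting hourly average less precise, which is the trade-off the text describes.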
While this may be the simplest technique for collecting data from the BMS, make sure the security and automation match your needs.

Does the BMS summarize the data? Does the BMS provide just the raw values collected from the devices and meters, or can it summarize the data for hourly, daily, and monthly intervals? For monotonically non-decreasing values, like energy use, TRIRIGA expects the summarized value to represent the difference between the latest sample and the first sample of a given interval. For all other numeric values, such as temperatures, TRIRIGA expects an average of all samples taken during the interval. If the BMS does not summarize the values, then it will need to be done after the values are collected.

Does the BMS provide normalized data? If your enterprise resides in a single geographic location, then it is possible that all of your BMSs use the same units for similar metrics, for example Fahrenheit for all temperatures and kWh for all energy. If so, then normalization is not necessary; just make sure that the thresholds of the analytics are suitable for the units being used. Out-of-the-box thresholds assume metric units and will not be acceptable if you are using English units. If you have disparate units and you don't want to maintain multiple thresholds in your analytics, then the data will need to be normalized, so that the same units of measure are used consistently across the enterprise, before it gets pushed into TRIRIGA. Most BMSs support derived, also called virtual, data points. Thus, even if a meter reports temperature in degrees F, the BMS will support creating a derived data point with the reported value converted to degrees C. However, if this is not already programmed in the BMS, it may be cost prohibitive to add all of the data points.
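The summarization rules above (delta for monotonically non-decreasing values, average for everything else) can be sketched as a single function. This is an illustration of the rule, not toolkit code:

```python
def summarize_interval(samples, monotonic):
    """Summarize one interval of raw samples the way TRIRIGA expects:
    monotonically non-decreasing values (e.g. energy use) become the
    difference between the latest and the first sample of the interval;
    all other numeric values (e.g. temperatures) become the average of
    all samples taken during the interval."""
    if not samples:
        return None  # no raw rows for the interval -> no summarized row
    if monotonic:
        return samples[-1] - samples[0]
    return sum(samples) / len(samples)

# An energy meter reading 100.0, 104.0, 110.5 kWh over the hour used 10.5 kWh;
# two temperature samples of 20.0 and 22.0 average to 21.0.
energy = summarize_interval([100.0, 104.0, 110.5], monotonic=True)
temp = summarize_interval([20.0, 22.0], monotonic=False)
```

Note that for the monotonic case only the first and last samples matter, whereas the average depends on every sample, which is why missed samples degrade averages but not deltas.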
If data needs to be normalized and it is cost prohibitive to do so in the BMS, then it will need to be done after the values are collected from the BMS but before they are put into TRIRIGA. This applies to both numeric and string values. For string values, it is preferable to have On represented by a single string, like "On", rather than requiring TRIRIGA to interpret "true", "Yes", "On", and "1" all as "On".

If you have multiple sites and multiple BMS vendors, are there any common interfaces? If you have multiple BMSs, you may want to look for a common way to interface to all of them. While there is no standard protocol that every vendor supports, many vendors support one or more of the protocols of the Object Linking and Embedding for Process Control (OPC) Foundation.

4.3 Data Collection Code
Because so much depends on your environment, the skills that are available in your organization, and your budget, it is difficult to be specific about the data collection code. Generally speaking, the code will need to interface with the BMS, and it will need to interface with the TRIRIGA database, specifically the staging tables. Things to think about when you design the code include the following questions:

Do you collect data as current ("live") values or historical values? Most, if not all, BMSs support a current ("live") repository and allow clients to retrieve those values. Support for historical collection is less certain, and could be less consistent in terms of how long values are stored. If your BMS has a historical repository and a client interface into it, then that may simplify your client and could alleviate the need for your code to have a persistent store.
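The normalization described above, both numeric (units of measure) and string (a single canonical "On"), might look like the following sketch in the data collection code. The unit abbreviations and the synonym lists are assumptions; match them to what your BMSs actually report and to the strings documented for the TRIRIGA staging tables:

```python
# Hypothetical normalization step in the data collection code.
TO_CELSIUS = {
    "C": lambda v: v,
    "F": lambda v: (v - 32.0) * 5.0 / 9.0,
}

ON_SYNONYMS = {"on", "true", "yes", "1"}
OFF_SYNONYMS = {"off", "false", "no", "0"}

def normalize_temperature(value, unit):
    """Convert a temperature reading to Celsius before pushing it to TRIRIGA."""
    try:
        return TO_CELSIUS[unit](value)
    except KeyError:
        raise ValueError("unknown temperature unit: %s" % unit)

def normalize_state(value):
    """Collapse vendor-specific state strings to a single canonical form,
    so TRIRIGA never has to interpret "true", "Yes", or "1" as "On"."""
    v = str(value).strip().lower()
    if v in ON_SYNONYMS:
        return "On"
    if v in OFF_SYNONYMS:
        return "Off"
    return str(value)
```

Keeping the conversion tables as data (rather than hard-coded branches) makes it easy to add a new unit or synonym without touching the collection logic, which matters when new meters are added.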
However, make sure you understand how long data will be stored in the historical repository, and consider the client, network, and BMS performance impact of collecting multiple values for potentially hundreds or thousands of data points.

Do I need to summarize the data? If you are collecting the current ("live") values of the data points from the BMS, then the BMS will not be summarizing the values. Thus, you will need to summarize the values after collecting them from the BMS and before pushing them into TRIRIGA. This will require a persistent store of the values collected throughout the interval. A data warehouse would be one way of doing this. Tools such as IBM Tivoli Monitoring (ITM) provide a robust data warehouse solution, which includes summarization and pruning of data when it is no longer needed. If you are collecting from the historical repository, it is possible that your BMS or aggregator summarizes the data. If not, then you will need to calculate the appropriate summarized values. If you plan on using other tools to summarize the values, for example to calculate the average for the hour, they may be sensitive to when the data was collected. For example, if you are using ITM, the Summarization and Pruning agent will not handle summarizing rows that were collected at the same time, even if they actually represent different sample times.

Do I need to normalize the data? What does normalization mean? What if meters in some of your buildings report temperature in Fahrenheit and others report temperature in Celsius? What value do you set for the threshold of the analytics? Charts that graph those values will make apples-to-apples comparison difficult. TRIRIGA will not normalize the units of measure; converting similar metrics to the same units needs to be done before the data is pushed into TRIRIGA.
If you use consistent units of measure throughout your enterprise, or if you are willing to have copies of the same analytics with thresholds that reflect various units, then you may decide normalization is not needed. If neither of these is true, then normalization is needed somewhere in the process before the data is pushed to TRIRIGA. While the BMS may be capable of normalizing the values, probably the most convenient place for normalization is in the data collection code. Refer to the staging table documentation here XXX or the fact table documentation here XXX for details of the expected units of measure in TRIRIGA. Can you collect the meter units from the BMS? Do you know all of the units used throughout your enterprise, and the abbreviations used by each of your BMSs, or do you need to make adding new conversions easy to do?

What do I do if communication to the BMS is disrupted? Because your code will likely be running on a different box, remote from the BMS, fault tolerance will be important. If you select a technique where a pipe or connection is set up and commands and responses flow through that pipe, then you need to consider how to handle that pipe getting broken between data collection intervals (while no data is flowing) as well as during data collection. Can you tolerate skipping one interval of collection? Can you collect just the missed data, or do you recollect all of the data for the interval? What does the BMS support? If you are completely unable to collect any data, then the handling is relatively easy: you just don't create a raw data row to represent that time interval. When the raw data is summarized, there will be fewer rows, or none, for that interval. If there are no raw rows for a summarization interval, then there should be no summarized rows for that interval. If there are fewer rows, then the average will be less precise.
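The fault tolerance described above might be sketched as a collection routine that re-establishes a broken connection once and, on total failure, skips the interval rather than writing a raw row. The `connect` and `read_points` callables are hypothetical stand-ins for your BMS client library:

```python
class CollectionError(Exception):
    """Stands in for whatever the BMS client raises when the pipe breaks."""
    pass

def collect_with_retry(connect, read_points, points, retries=1):
    """Collect the given data points, re-establishing the connection if it
    breaks mid-collection. Returns None if the interval must be skipped
    entirely, in which case no raw data row is written and the later
    summarization simply sees fewer (or no) rows for the interval."""
    for attempt in range(retries + 1):
        try:
            conn = connect()
            return read_points(conn, points)
        except CollectionError:
            if attempt == retries:
                return None  # give up on this interval; do not write a raw row

# Simulated flaky BMS: the first connect fails, the second succeeds.
attempts = {"n": 0}
def flaky_connect():
    attempts["n"] += 1
    if attempts["n"] == 1:
        raise CollectionError("pipe broken")
    return "connection"

def read(conn, pts):
    return {p: 1.0 for p in pts}

result = collect_with_retry(flaky_connect, read, ["AHU1.zone_temp"])
```

Whether a retry re-reads all points or only the missed ones depends on what the BMS supports, as the questions above note.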
If this becomes a persistent problem, you may want to have your data collection code send an SNMP event, or you may want to update the staging table-to-fact table ETLs to send notifications when the number of samples is consistently low.

What do I do if some, but not all, data points are collectible? There could be times when communication to the BMS is working but some data is not collectible. As with a complete communication disruption, you should consider how to identify the data points that failed and whether you can retry just those data points or must recollect all of them. The out-of-the-box analytics and reports expect null for data points that either do not exist or cannot be collected. If you are unable to collect some data points associated with a device but can collect others, the problem is more difficult. Since all data points associated with a device are stored in a single row, you want to create the raw row; but what do you do for the values that cannot be populated? If you are using the ITM Agent Builder, null is not allowed for an attribute, so the staging table-to-fact table ETLs expect -1 (for numbers) or "-1" (for strings) and insert null into the fact table when those values occur. If the ITM Agent Builder is not used, then you may use null or -1 in the raw data row. As with complete disruptions, if this becomes a persistent problem you may want to send an SNMP event from the data collection code or have the staging table-to-fact table ETLs send notifications.

Do I need to have a warehouse? As was stated earlier, the TRIRIGA staging tables and fact tables are not intended to be a warehouse. This means that 1) raw data that has not been summarized should not be stored in TRIRIGA, and 2) pruning and cleaning of the data in TRIRIGA should consider TRIRIGA tablespace size and performance.
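The sentinel convention described above (-1 for numbers, "-1" for strings, mapped to null in the fact table) can be sketched as a small mapping function. This mirrors the convention the text describes for the sample ETLs; it is an illustration, not the ETL code itself:

```python
SENTINEL_NUM = -1
SENTINEL_STR = "-1"

def to_fact_value(value):
    """Map the -1 sentinels used for uncollectible data points (ITM Agent
    Builder attributes cannot be null) to None, which becomes NULL in the
    fact table, as the sample staging table-to-fact table ETLs do."""
    if value == SENTINEL_NUM or value == SENTINEL_STR:
        return None
    return value

# A raw row where the fan speed could not be collected:
raw_row = {"zone_temp_c": 21.5, "fan_speed": -1, "mode": "-1"}
fact_row = {k: to_fact_value(v) for k, v in raw_row.items()}
```

The obvious caveat with any in-band sentinel is that -1 must never be a legitimate reading for the data point in question; if it can be, a different sentinel or an out-of-band marker is needed.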
If you decide you need a warehouse, perhaps because you need to store the raw data for subsequent summarization, then that should be done outside of TRIRIGA. If you decide to implement a warehouse, you may want to avoid storing data as key/value pairs. Implementing a table where all meters for a specific device and time interval are stored in a single row (a column for each meter) may be more efficient.

How do I specify the data points to be collected? A BMS may contain data points that are completely unrelated to energy and do not need to be collected. You should consider collecting only pertinent data. As was stated earlier, the TRIRIGA staging tables and fact tables should contain only data currently being used for, or expected to be used for, analytics and charts. However, if you implement a warehouse outside of TRIRIGA, you may choose to collect additional data and store it in your warehouse. If you do so, make sure you consider the impact on BMS, network, and warehouse performance.

4.4 Using IBM Tivoli Monitoring
Some of the issues mentioned above can be solved by using IBM Tivoli Monitoring (ITM). ITM provides a robust warehousing solution. Data that is collected by an ITM agent can be warehoused in the Tivoli Data Warehouse (TDW), and the ITM Summarization and Pruning agent will take care of summarizing the data for hourly, daily, and monthly intervals. It can also prune old rows to control the database size, and it is very customizable. If no existing ITM agent interfaces to your BMS, then you can use the Agent Builder to create a custom agent. The Agent Builder provides an Eclipse-based development environment and supports several out-of-the-box data sources that might meet your needs and save you from having to write and maintain Java or C++ code.
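The wide-row warehouse layout recommended above (one row per device and time interval, one column per meter, instead of one key/value row per reading) can be sketched with SQLite standing in for the warehouse; the table and column names are hypothetical:

```python
import sqlite3

# A wide-row warehouse table: all meters for a device and sample time live
# in a single row, so no pivoting is needed before summarization.
conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE raw_wide (
    device_id    TEXT NOT NULL,
    sample_time  TEXT NOT NULL,
    supply_temp  REAL,   -- one column per data point on the device
    return_temp  REAL,
    fan_speed    REAL,
    PRIMARY KEY (device_id, sample_time)
)
""")
conn.execute(
    "INSERT INTO raw_wide VALUES ('AHU1', '2014-03-18T13:15:00Z', 13.0, 21.5, 0.8)"
)

# All readings for the device arrive in a single fetch:
row = conn.execute(
    "SELECT supply_temp, return_temp, fan_speed FROM raw_wide "
    "WHERE device_id = 'AHU1'"
).fetchone()
```

The trade-off is that adding a meter to a device means adding a column, whereas a key/value design needs no schema change but forces a pivot (and many more rows) every time an interval is summarized.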
If no out-of-the-box data sources interface to your BMS, then there could still be advantages to using ITM's custom data provider support rather than writing everything from scratch. If you choose a solution that leverages ITM, there are sample warehouse-to-staging table ETLs to help you quickly implement that part of the data flow. Refer to Building a Custom ITM Agent for more details.

4.5 Csv-to-Fact tables ETL
Some of the issues mentioned above can be solved by using the sample csv-to-fact table ETLs. These ETLs accept a csv file as input. The file should contain normalized raw data. The ETLs will summarize the data and put it directly into the fact tables. For more information about the csv-to-fact table ETLs, see the SMC content here TBD.

5 Building a Custom ITM Agent
The IBM Tivoli Monitoring (ITM) Agent Builder can be used to build a custom agent to collect data from your BMS or aggregator. The Agent Builder User's Guide is available here: http://publib.boulder.ibm.com/infocenter/tivihelp/v61r1/index.jsp?topic=%2Fcom.ibm.itm.doc_6.3%2Fbuilder%2Fagentbuilder_user.htm

The Agent Builder is a very flexible tool and could be overwhelming to someone new to the technology. This section provides some guidance to help you design and implement your data collection process.

5.1 Considerations for Agent
Within the Eclipse-based Agent Builder, a few areas are particularly important to understand if you are new to this tool:
• Data sources
• Subnodes
• Attribute groups and attributes

5.1.1 Defining Data Sources
When you use the Agent Builder, one of the first decisions you will need to make is the data source.
There is some general information here: http://publib.boulder.ibm.com/infocenter/tivihelp/v61r1/index.jsp?topic=%2Fcom.ibm.itm.doc_6.3%2Fbuilder%2Fab_data_sources.htm

As the name implies, you are configuring or defining the mechanism that will be used as the source of the data; in other words, how the data will be collected. The list of supported data sources might be overwhelming, but some can be excluded immediately as inappropriate for collecting data from building management systems. The following data sources are unlikely to be appropriate:
• Process
• Windows service
• WMI
• Perfmon
• CIM
• SNMP events
• Ping
• AIX binary log
• Windows Event Log
• Command return code

That still leaves a lot of options, which is both good and bad. One or more of the following could be appropriate:
• SNMP
• JDBC
• JMX
• HTTP
• SOAP
• Log file
• Output from a script
• Socket
• Java API

While SNMP, JMX, HTTP, SOAP, and Socket could be feasible for building management systems, the appropriateness of each will depend heavily on what the building management system supports, as well as how complex the payload is. While some vendors might support SNMP, there could be restrictions on how their SNMP MIB can be used within solutions. JMX may work as a transport protocol, but the Agent Builder functionality may not be able to process the payload as needed. While some vendors might support an HTTP or SOAP interface, if they have any special DLLs that need to be referenced, you may not be able to reference them from the data source. This leaves JDBC, Log file, Output from a script, and Java API. If the building management system exposes data via a JDBC connection, then that alternative may be the most secure and easiest to implement.
Restricting data collection to only the points of interest, mapping a data point to a device attribute, and normalization may be more difficult or need to be hard coded. Hard coding these would mean that adding a new attribute requires a rebuild of the agent. Additionally, depending on how the building management system organizes the data, it could be difficult to pivot the data so that one row reflects all of the attributes associated with a device for a specific time period.

If the building management system can write data point values to a log file, then the Log File data source may be appropriate. Consider how you will maintain the security of data written to a flat file. Also, the same issues that exist for JDBC with respect to hard-coding data points of interest, mapping, normalization, and pivoting the data also exist with the Log File data source.

Output from a script and Java API are more flexible because it is easier to manipulate the data before passing it along to the agent process. You have much more control over what is collected, how it is mapped, how values are massaged, and how the data is organized. For these data sources, you create a separate program or executable. For Output from a script, you may use any programming language you prefer. For example, if the building management system has Visual Basic samples, you might decide to use Visual Basic. You may decide to control some behavior through external configuration files (data points to collect, mapping, normalization formulae, and so on) to avoid having to modify the code when small changes are needed. If the building management system supports a persistent connection, you may need to do something extra to ensure that you do not initiate and tear down the connection on every call, as that could be very inefficient.
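A minimal sketch of such a script data source is shown below, in Python for illustration. The point names, device names, and the semicolon delimiter are assumptions (the delimiter is configurable in the Agent Builder), and the BMS call is a placeholder; the point of the sketch is the externalized point-to-attribute mapping and the pivot to one output row per device.

```python
import json

# Hypothetical point-to-attribute mapping. In practice this would be loaded
# from an external configuration file so that small changes (new points,
# different mappings) do not require code changes or an agent rebuild.
CONFIG = json.loads("""
{
  "AHU1": {"SupplyTemp": "Gauge32_01", "ReturnTemp": "Gauge32_02"},
  "AHU2": {"SupplyTemp": "Gauge32_01", "ReturnTemp": "Gauge32_02"}
}
""")

def read_bms_points(device):
    # Placeholder for the real BMS call, for example a vendor API request
    # issued over a persistent connection that is opened once, not per point.
    samples = {"AHU1": {"SupplyTemp": 18.5, "ReturnTemp": 22.0},
               "AHU2": {"SupplyTemp": 19.0, "ReturnTemp": 23.5}}
    return samples[device]

def rows(config):
    # Pivot: emit one delimited row per device, carrying all of that
    # device's attribute values together. The mapping also restricts
    # collection to only the points of interest.
    out = []
    for device, points in config.items():
        values = read_bms_points(device)
        columns = [device] + [str(values[point]) for point in points]
        out.append(";".join(columns))
    return out

if __name__ == "__main__":
    for row in rows(CONFIG):
        print(row)
```

Normalization (unit conversion, scaling) would be applied per point before formatting, driven by the same configuration file.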
You may also want to collect multiple data points from the BMS in one call rather than one at a time.

5.1.2 Subnodes

When you use the Agent Builder, another decision you will need to make is whether to use subnodes. There is some general information here:
http://publib.boulder.ibm.com/infocenter/tivihelp/v61r1/index.jsp?topic=%2Fcom.ibm.itm.doc_6.3%2Fbuilder%2Fab_snodes.htm

This decision affects the organization of the data and how much data is transferred between the data source and the agent. Because of the potentially very large amount of data that you may be collecting from a building management system, the recommendation is to use subnodes. Specifically, you should have a single subnode type, such as BMS, and represent each device as a subnode of that type. For example, AHU1 would be represented as a subnode. Organizing the data by subnodes provides very useful groupings of data if you decide to use the Tivoli Enterprise Portal (TEP).

5.1.3 Attribute Groups and Attributes

Attribute groups correlate to database tables, and attributes correlate to database table columns. As such, how attribute groups and attributes are defined has a very significant effect on how the data is organized in the Tivoli Data Warehouse (TDW). While the Agent Builder abstracts database concepts so that you do not need to know about databases to use it, someone familiar with database concepts can probably see those concepts leaking into the options presented on the GUI. This is especially true when defining attributes:
http://publib.boulder.ibm.com/infocenter/tivihelp/v61r1/index.jsp?topic=%2Fcom.ibm.itm.doc_6.3%2Fbuilder%2Fab_att_types.htm

If you are going to use the IBM Tivoli Monitoring (ITM) Summarization and Pruning Agent to summarize your raw data, then choosing the appropriate numeric type of Gauge versus Counter will ensure the resulting summarized values are useful.
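To make the distinction concrete, the following simplified sketch mimics the two summarization behaviors. It is an illustration only, not the Summarization and Pruning Agent's actual implementation; counter wrap handling is omitted.

```python
def summarize_counter(readings):
    # Counter: monotonically non-decreasing values, such as an energy
    # meter. The useful summary is the delta over the interval.
    return readings[-1] - readings[0]

def summarize_gauge(readings):
    # Gauge: values that fluctuate over time, such as a temperature.
    # The useful summaries are the average and the maximum.
    return sum(readings) / len(readings), max(readings)
```

For example, hourly meter readings of 1000, 1010, and 1030 kWh summarize to 30 kWh consumed, while temperature readings of 20, 22, and 24 degrees summarize to an average of 22 and a maximum of 24.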
For building management systems, Counter should be used for any monotonically non-decreasing number, for example an electrical energy meter reading that continuously increases month after month until it eventually wraps back to zero. When Counters are summarized, the delta from the start of the interval to the end of the interval is calculated, which is particularly useful for energy data. Numeric values such as temperatures, humidity, flow rates, and instantaneous power typically increase and decrease over time, so the Gauge type should be used. When Gauges are summarized, the average and maximum are calculated, which are particularly useful for these types of environmental metrics. If you are going to write your own summarization code rather than use the ITM Summarization and Pruning Agent, then it does not matter whether you use Counter or Gauge.

If you are going to use the sample warehouse-to-staging table ETLs, then there are specific attribute groups and attributes that must be defined. If you are going to write your own ETLs to move your data into staging tables, then there is no restriction on the attribute groups and attributes you define.

If you want to use the ITM Summarization and Pruning Agent and the sample warehouse-to-staging table ETLs, the following attribute groups and attributes are required:

• There must be an attribute group named K79_BMS_DEVICE_PROPERTIES. That attribute group must contain the following attributes:
• Summary_Column_Name – this should be defined as a string with a length of 300. This will be populated with the column name in the summary attribute group that is associated with this point.
• Tririga_Property – this should be defined as a string with a length of 300.
This will be populated with the field name in the sample staging table associated with this point.

Examples:

Row  Summary_Column_Name   Tririga_Property
1    AVG_Gauge32_01        triFactCoolantFlowLMNU
2    AVG_Gauge32_02        triFactCoolingValvePctNU
3    AVG_Gauge32_03        triFactEnergyUseNU
4    AVG_Gauge32_04        triFactExhaustFanCurrentAmpNU
5    AVG_Gauge32_05        triFactExhaustFanOutputPctNU
6    AVG_Gauge32_06        triFactHeatingValvePctNU
7    AVG_Gauge32_07        triFactHumidifierValvePctNU
8    AVG_Gauge32_08        triFactMixAirTempCNU
9    AVG_Gauge32_09        triFactOutsideAirDamperMinPctNU
10   AVG_Gauge32_10        triFactOutsideAirDamperPctNU
11   AVG_Gauge32_11        triFactOutsideAirTempCNU
12   AVG_Gauge32_12        triFactOutsideEnthalpyJKGNU
13   AVG_Gauge32_13        triFactOutsideHumidityPctNU
14   AVG_Gauge32_14        triFactPowerUsageWNU
15   AVG_Gauge32_15        triFactPreheatValvePctNU
16   AVG_Gauge32_16        triFactReheatValvePctNU
17   AVG_Gauge32_17        triFactReturnAirCO2PPMNU
18   AVG_Gauge32_18        triFactReturnAirTempCNU
19   AVG_Gauge32_19        triFactReturnCoolantTempCNU
20   AVG_Gauge32_20        triFactReturnFanOutputPctNU
21   AVG_Gauge32_21        triFactSteamValvePctNU
22   AVG_Gauge32_22        triFactSupplyAirTempCNU
23   AVG_Gauge32_23        triFactSupplyAirTempSPCNU
24   AVG_Gauge32_24        triFactSupplyCoolantTempCNU
25   AVG_Gauge32_25        triFactSupplyCoolantTempSPCNU
26   AVG_Gauge32_26        triFactSupplyFanCurrentAmpNU
27   AVG_Gauge32_27        triFactSupplyFanOutputPctNU
28   AVG_Gauge32_28        triFactSupplyRelHumiditySPPctNU
29   AVG_Gauge32_29        triFactZoneRelHumidityPctNU
30   AVG_Gauge32_30        triFactZoneTempCNU
31   LAT_Short_String_01   triFactEconomizerModeNU
32   LAT_Short_String_02   triFactExhaustFanStatusNU
33   LAT_Short_String_03   triFactOccupiedCommandNU
34   LAT_Short_String_04   triFactSupplyFanStatusNU

• There must be an attribute group named K79_BMS_DEVICE_SUMMARY.
That attribute group must contain the following attributes:
• Device_Name – this should be defined as a string with a length of 400. This will be populated with the name of the device as it is known in the building management system. For example, B002-AHU01.
• Device_Type – this should be defined as a string with a length of 400. This will be populated with a device type that is recognized by the TREES components. Valid values are: AHU, Chiller, Meter.
• Device_SubType – this should be defined as a string with a length of 400. This is currently not used by TREES, but is important if in the future you want to filter information based on subtype. For example, if you develop a rule that is only appropriate for roof-mounted AHUs, setting this value to “roof” would enable that filtering.
• Nameplate_ID – this should be defined as a string with a length of 150. This will be populated with the Nameplate ID as it exists in the TRIRIGA asset record for this asset. For example, AHU-0001.
• Counter64_01 – this should be defined as a Numeric, 64 bit, Purpose = Counter, Scale Decimal adjustment = 1. This will be populated with a numeric value that is monotonically non-decreasing, such as energy from a meter.
• Gauge32_01 – this should be defined as a Numeric, 32 bit, Purpose = Gauge, Scale Decimal adjustment = 1. This will be populated with a numeric value.
• Gauge32_02 – same as Gauge32_01.
• Gauge32_03 – same as Gauge32_01.
• Gauge32_04 – same as Gauge32_01.
• Gauge32_05 – same as Gauge32_01.
• Gauge32_06 – same as Gauge32_01.
• Gauge32_07 – same as Gauge32_01.
• Gauge32_08 – same as Gauge32_01.
• Gauge32_09 – same as Gauge32_01.
• Gauge32_10 – same as Gauge32_01.
• Gauge32_11 – same as Gauge32_01.
• Gauge32_12 – same as Gauge32_01.
• Gauge32_13 – same as Gauge32_01.
• Gauge32_14 – same as Gauge32_01.
• Gauge32_15 – same as Gauge32_01.
• Gauge32_16 – same as Gauge32_01.
• Gauge32_17 – same as Gauge32_01.
• Gauge32_18 – same as Gauge32_01.
• Gauge32_19 – same as Gauge32_01.
• Gauge32_20 – same as Gauge32_01.
• Gauge32_21 – same as Gauge32_01.
• Gauge32_22 – same as Gauge32_01.
• Gauge32_23 – same as Gauge32_01.
• Gauge32_24 – same as Gauge32_01.
• Gauge32_25 – same as Gauge32_01.
• Gauge32_26 – same as Gauge32_01.
• Gauge32_27 – same as Gauge32_01.
• Gauge32_28 – same as Gauge32_01.
• Gauge32_29 – same as Gauge32_01.
• Gauge32_30 – same as Gauge32_01.
• Short_String_01 – this should be defined as a string with a length of 50. This will be populated with a string value.
• Short_String_02 – same as Short_String_01.
• Short_String_03 – same as Short_String_01.
• Short_String_04 – same as Short_String_01.

If you are going to write your own warehouse-to-staging table ETL, then you can name the attribute groups and attributes whatever you want. While having attribute names that match the TRIRIGA properties they are associated with is simpler, be aware that doing so means that extending the data model (that is, adding a new attribute) will require rebuilding the agent.

6 Product documentation

TBD

© Copyright IBM Corporation 2013
IBM
United States of America
Produced in the United States of America

US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not grant you any license to these patents. You can send license inquiries, in writing, to:

IBM Director of Licensing
IBM Corporation
North Castle Drive
Armonk, NY 10504-1785
U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PAPER “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions; therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes may be made periodically to the information herein; these changes may be incorporated in subsequent versions of the paper. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this paper at any time without notice.

Any references in this document to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites.
The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to:

IBM Director of Licensing
IBM Corporation
4205 South Miami Boulevard
Research Triangle Park, NC 27709
U.S.A.

All statements regarding IBM's future direction or intent are subject to change or withdrawal without notice, and represent goals and objectives only. This information is for planning purposes only. The information herein is subject to change before the products described become available.

If you are viewing this information softcopy, the photographs and color illustrations may not appear.

Trademarks

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at www.ibm.com/legal/copytrade.shtml.

Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.