TIBCO Cloud Bus – At a glance and more….

In the below screenshot, look at the URL closely and you will notice the highlighted Amazon AWS.
Yes – this is the TIBCO Classic Administrator hosted on TIBCO Cloud Bus, which leverages Amazon Web Services for its cloud computing platform. If this catches your attention – read on…


TIBCO Cloud Bus is a subscription-based Integration Platform-as-a-Service (iPaaS). What? Another “As-A-Service” buzzword? Simply put – iPaaS allows integration between cloud applications as well as between cloud and on-premises applications. Cloud-to-on-premises integration is important because, for example, financial services companies that must follow many security and regulatory compliance requirements may need to keep some applications in house while hosting other applications on TIBCO Cloud Bus, with seamless integration between the on-premises and cloud apps.

Hosting the apps on the cloud means you don’t need your IT infrastructure team to set up an environment for you to deploy your application. In TIBCO Cloud Bus, you can provision your own TIBCO environment in very little time before you quickly deploy your applications.
TIBCO Cloud Bus can increase or decrease the number of machines that sit behind the services. It knows what the load on the application is and automatically increases or decreases the horsepower required for the application.

For connectivity from TIBCO Cloud Bus to on-premises TIBCO infrastructure, TIBCO Cloud Bus has to connect through a VPN gateway from the public cloud to the company’s datacenter. This means setting up a VPN tunnel at the company’s end, which may involve the company’s IT team as well.

At the time of writing this, I have two areas to research in TIBCO Cloud Bus. I will edit this post when I have done some more research – hopefully with some answers.
1) Cost Effectiveness
How cost effective is TIBCO Cloud Bus vs. an on-premises TIBCO setup over a longer period of time? If it does not save costs over time, why would an organization not simply continue to grow its own in-control, on-premises infrastructure to host TIBCO services rather than use TIBCO Cloud Bus? Need some case studies and statistics.

2) Security
How does TIBCO Cloud Bus address security concerns when it comes to hosting all the apps on the cloud, or to integration between on-premises applications and cloud applications? Need some case studies.
All said and done, IMHO, we must acknowledge that cloud computing is not just a buzzword anymore. It’s real and here to stay. So when TIBCO, one of the leaders in the integration and middleware industry, has something to offer in this space, we need to keep an eye on it – even if we don’t necessarily adopt the offering immediately.

If you are already bored of reading, try TIBCO Cloud Bus yourself (30-day trial). If you need help with that, I have step-by-step instructions below on how to set up a TIBCO Cloud Bus environment. Good luck and happy learning!

Setting up a TIBCO Cloud Bus environment

Note :- For the steps below, use Mozilla Firefox for browsing the Cloud Bus if you can – I have seen issues in IE and Chrome.

1) Go to cloudbus.tibco.com.


2) Enter your TIBCO Access Point credentials. If you are not registered there, register at tap.tibco.com and use the same login and password here.


3) If you have TIBCO BW installed on your machine, you already have the development environment and can skip this step. If you don’t, you can download the installers. Click on Develop > TIBCO Cloud Bus Designer (Windows 64-bit) [if you have 64-bit Windows installed].

The TIBCO Cloud Bus Designer package includes the following.


a) Extract the zip file.
b) Open the silent file, Install-Designer.silent, in a text editor.
c) Review the license files in the license folder. If you accept the licenses, change the <acceptLicense> entry key to true in the silent file.
d) Change the <installPackageDirectory> entry key to your extracted temporary folder: C:\temp\TIB_tcb-designer_1.0.0_win_x86_64
e) Optionally update the values for the <installationRoot> and <environmentName> keys.
f) Save the silent file.
g) Run the installer: Install-Designer.cmd
h) After the installation is complete, start TIBCO Designer from the Windows Startup menu: Choose All Programs > TIBCO > Cloud Environment > TIBCO Designer 5.8.
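The silent-file edits in steps (c) through (e) amount to changing a handful of entry keys. As a rough sketch – the exact layout of Install-Designer.silent may differ, and the optional values here are illustrative defaults, not from the original post:

```xml
<!-- Install-Designer.silent (fragment - illustrative only) -->
<entry key="acceptLicense">true</entry>
<entry key="installPackageDirectory">C:\temp\TIB_tcb-designer_1.0.0_win_x86_64</entry>
<!-- Optional entries: -->
<entry key="installationRoot">C:\tibco</entry>
<entry key="environmentName">TIBCO-HOME</entry>
```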

4) Click on the deploy button to see the highlighted “Skyway” button.


5) Click on “Cloud Bus Starter Template” link to provision the BW, EMS and admin stack.


6) See more information on the stack we are going to provision.


7) Clicking on “Proceed with Provisioning” will ask you to complete the required fields.


8) Click on Edit


9) I entered the values below, clicked the Apply button, and then clicked the “Proceed with Provisioning” button.



  • Stack Name
  • TIBCO Domain Name
  • TIBCO Domain User
  • TIBCO Domain Password
  • EMS User
  • EMS Password

10) Initializing stack


11) Click Start to initialize the stack


12) Stack provisioning started


13) Stack is running


14) If you click on the “TIBCO Admin URL” link above, TIBCO Administrator opens in a separate window.
That’s right – this TIBCO admin is running on cloud.

15) That’s it – you can deploy any TIBCO app to TIBCO Cloud Bus just as you did on your on-premises infrastructure – just like I did, as shown in the below screenshot.



Unable to deploy Tibco BW application after configuring Tibco BW Process Monitor – ERROR – [TRA-000000] StreamGobbler(ERROR) : data = java.lang.NoClassDefFoundError: com/tibco/processmonitor/client/run Caused by: java.lang.ClassNotFoundException: com.tibco.processmonitor.client.run

As a part of configuring TIBCO BusinessWorks Process Monitor (BWPM), one needs to add a set of properties to the bwengine.tra file on the TIBCO BusinessWorks machine.

#BW ProcessMonitor properties

After adding the above, I could no longer deploy any BW application in our TIBCO Administrator. The tsm.log showed the below error each time I attempted to deploy an application.

[TRA-000000] StreamGobbler(ERROR) : data = java.lang.NoClassDefFoundError: com/tibco/processmonitor/client/run
Caused by: java.lang.ClassNotFoundException: com.tibco.processmonitor.client.run
    at java.net.URLClassLoader$1.run(Unknown Source)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(Unknown Source)
    at java.lang.ClassLoader.loadClass(Unknown Source)
    at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
    at java.lang.ClassLoader.loadClass(Unknown Source)

The properties we added to bwengine.tra reference the class com.tibco.processmonitor.client.run – and, of course, the BW engine was not able to find that class. I started looking into which jar would contain it, and it turns out to be bwpm.jar, which comes as a part of the BWPM install package.


Adding that bwpm.jar to C:\tibco\bw\5.10\lib resolved the issue, and I could deploy BW apps through the classic TIBCO Administrator without any problems.



Tibco BusinessWorks – BW-JDBC-100034 "Configuration Test Failed. Exception [com.microsoft.sqlserver.jdbc.SQLServerException] occurred. com.microsoft.sqlserver.jdbc.SQLServerException: Connection reset


BW-JDBC-100034 “Configuration Test Failed. Exception [com.microsoft.sqlserver.jdbc.SQLServerException] occurred. com.microsoft.sqlserver.jdbc.SQLServerException: Connection reset

<ns1:JDBCSQLException xmlns:ns1="http://schemas.tibco.com/bw/plugins/jdbc/5.0/jdbcExceptions">
  <msg>JDBC error reported: (SQLState = 08S01) - com.microsoft.sqlserver.jdbc.SQLServerException: Connection reset ClientConnectionId:396457ef-182e-4ddb-8b50-57541ce45b3b</msg>
  <detailStr>com.microsoft.sqlserver.jdbc.SQLServerException: Connection reset ClientConnectionId:396457ef-182e-4ddb-8b50-57541ce45b3b</detailStr>
</ns1:JDBCSQLException>

I saw this issue in a TIBCO BusinessWorks application using SQL Server JDBC Driver 4.0 to connect to a SQL Server 2008 R2 server. However, I believe this could happen with any Java application using JDBC Driver 4.0 to connect to a SQL Server 2008 R2 server.

Solution :- Check if at least Microsoft® SQL Server® 2008 R2 Service Pack 2 is installed on the SQL Server. If not, install it. If SP2 or higher (e.g. SP3) is already installed and you still see the same issue – the problem is something else.

How to check if at least SP2 is installed?
Connect to the server from SQL Server Management Studio and execute the below query :-
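(The original query was shown as an image. A common way to check the service pack level – shown here as an assumption, not necessarily the exact original query – is:)

```sql
-- ProductLevel returns 'RTM' or the installed service pack level, e.g. 'SP2'
SELECT SERVERPROPERTY('ProductLevel')   AS ProductLevel,
       SERVERPROPERTY('ProductVersion') AS ProductVersion;
```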

You should see SP2 as part of the result.


Why? See the following Microsoft KB article :-
FIX: You cannot connect to SQL Server by using JDBC Driver for SQL Server after you upgrade to JRE 6 update 29 or a later version

What if you cannot install SP2?
We should always apply the latest service packs on the SQL Server – and not only because of this issue. However, if you cannot, and you have to live with the current database server install, try using the below attribute in your TRA files.

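(The attribute itself appeared as a screenshot in the original post. For reference, the JVM-level workaround described in the Microsoft KB above is disabling CBC protection; in a TRA file that might look like the following – the property name and placement are my assumption, so verify against your TIBCO version:)

```
# Assumption: pass the JVM flag through the TRA extended properties entry
java.extended.properties=-Djsse.enableCBCProtection=false
```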

If you are trying to test the connection from TIBCO BusinessWorks Designer, add the above attribute in your <TIBCO_HOME>\designer\5.8\bin\designer.tra file.

If you want to deploy this change, add the above attribute in your <TIBCO_HOME>\domain\<Domain Name>\application\<Application>\<Application>.tra file and restart the service instances of the application.

The same issue does not happen when you connect to SQL Server 2005. That’s weird! It sounds like somebody broke something in SQL Server 2008 R2 and fixed it in SP2.

Additional information
Version number for SQL server in RTM and service packs.
[Source :http://sqlserverbuilds.blogspot.com/]



Sql Server job – Cannot insert the value NULL into column ‘owner_sid’, table ‘msdb.dbo.sysjobs’

I needed to script out a SQL job on one SQL Server (Server1) and use the same script to create the same job on a different SQL Server machine (Server2). So I scripted out the job from Server1, ran the script on Server2, and got the below error :-

Cannot insert the value NULL into column ‘owner_sid’, table ‘msdb.dbo.sysjobs’

Looking more closely at the script, I noticed that the script generated from the job on Server1 also contained the login I had used to log in to Server1. That login is different from the one I was using to run the script on Server2. See the highlighted login in the below screenshot.


Once I changed the script to use the login I was using to run it on Server2, the script ran fine and the job got created on Server2.
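For reference, the part of the generated script that matters is the @owner_login_name argument to msdb.dbo.sp_add_job (the job name and login below are placeholders, not from the original script):

```sql
EXEC msdb.dbo.sp_add_job
    @job_name = N'MyJob',
    -- Change this to a login that actually exists on the target server (Server2):
    @owner_login_name = N'DOMAIN\ValidLoginOnServer2';
```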




Maximum connections exceeded!!!! – How to find out who is logged on to a server and not letting you in?

We all hate to see this message when we want to log in to a server :-
“The terminal server has exceeded the maximum number of allowed connections.”


All you need is the PsLoggedOn utility from Sysinternals (now owned by Microsoft). Another reason we should all love Mark Russinovich and the Sysinternals tools.
Just follow the steps :-

  1. Download PSTools from here.
  2. Extract the zip file to a folder.
  3. Open command prompt and navigate to the extracted folder.
  4. Run the following command. Keep the \\ and replace MachineName with the fully qualified name of the machine where you want to see who is currently logged on:

psloggedon \\MachineName

  5. You will see who is logged on to that machine.

This saves you from sending emails to a group to find out who is logged in to the server. Instead, because you know who is logged in, you can ping or email those specific users and ask them to log off.



2013 in review

The WordPress.com stats helper monkeys prepared a 2013 annual report for this blog.

Here’s an excerpt:

The concert hall at the Sydney Opera House holds 2,700 people. This blog was viewed about 9,200 times in 2013. If it were a concert at Sydney Opera House, it would take about 3 sold-out performances for that many people to see it.

Click here to see the complete report.



SQL Server–Change Data Capture [CDC] for “Watching” database tables for DML operations

In scenarios where we want to watch tables for inserts, updates and deletes, we implement triggers. Triggers not only require database development effort and must be written correctly – they also place locks on tables and slow things down.

In SQL Server 2008, Microsoft introduced a new capability called “Change Data Capture” [CDC] to watch and track inserts, updates and deletes on tables. It requires almost no database development effort and is more efficient than triggers. A very nice thing about CDC is that it makes use of the transaction log, which already has all the data about any changes made to the database. So why reinvent the wheel?

Basically, first you enable CDC on the database. Then you enable CDC on the table you want to watch (e.g. Account), which automatically creates a change tracking table (“Account_CT”) for the watched table. Any changes in the watched table (e.g. Account) get recorded in the change tracking table (e.g. Account_CT), and you can use the tracking table for all your queries.

“Talk is cheap. Show me the code.” – Linus Torvalds

1. Preparing sample data

/*Create sample database*/
USE [master]
IF EXISTS (SELECT name FROM sys.databases WHERE name = N'ChangeDataCaptureTest')
DROP DATABASE [ChangeDataCaptureTest]
CREATE DATABASE [ChangeDataCaptureTest]

/*Create the sample table – this is the table we will be watching for inserts, updates and deletes*/
USE ChangeDataCaptureTest
CREATE TABLE Account
(
    Id INT PRIMARY KEY IDENTITY(1,1),
    [Description] VARCHAR(500),
    [Active] BIT
)

2. Enable CDC on the database

This looks scary, but all it does is check an already existing flag in the sys.databases table in the master database. If you execute the following, you can see which databases currently have CDC enabled. In the below example, it is not enabled in any of the databases :-
USE master
SELECT [name], database_id, is_cdc_enabled 
FROM sys.databases      


Now we can enable CDC on our sample database.
USE ChangeDataCaptureTest
EXEC sys.sp_cdc_enable_db

If you execute the following, you can see the “is_cdc_enabled” column for our sample database is enabled:-
USE master
SELECT [name], database_id, is_cdc_enabled
FROM sys.databases


3. Enable CDC on the table you want to watch for Insert/Update/Delete

Now we need to enable the CDC on the table we want to watch.

USE ChangeDataCaptureTest
EXEC sys.sp_cdc_enable_table
@source_schema = N'dbo',
@source_name   = N'Account',
@role_name     = NULL

When we execute the above, we see two SQL jobs created and started automatically.

cdc.ChangeDataCaptureTest_capture –
This job watches the table “Account” and puts changes in the tracking table Account_CT.
cdc.ChangeDataCaptureTest_cleanup –
This job cleans up the tracking table Account_CT and can be scheduled as per the requirement.

At this point, if we query sys.tables, we can see the “is_tracked_by_cdc” flag for the watched table :-
USE ChangeDataCaptureTest
SELECT [name], is_tracked_by_cdc
FROM sys.tables


4. Testing the results

Let us insert/update/delete data in the watched table [Account] and see the tracked changes in the [Account_CT] table.

Insert operation
USE ChangeDataCaptureTest
INSERT INTO Account ([Description], [Active])
VALUES ('Test', 1)

Select to verify results
USE ChangeDataCaptureTest
SELECT * FROM cdc.dbo_Account_CT


A value of 2 in the “__$operation” column indicates an insert. We can see the values in the “Description” and “Active” columns.

Update operation
USE ChangeDataCaptureTest
UPDATE Account
SET Active= 0
WHERE Id = 1
Select to verify the results
USE ChangeDataCaptureTest
SELECT * FROM cdc.dbo_Account_CT

Value for “__$operation” column :-
3 = update (captured column values are those before the update operation).
4 = update (captured column values are those after the update operation)
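For example, to look only at the after-images of updates, filter the tracking table on this column:

```sql
USE ChangeDataCaptureTest
SELECT * FROM cdc.dbo_Account_CT
WHERE __$operation = 4
```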

DELETE Operations
USE ChangeDataCaptureTest
DELETE Account
WHERE id = 1
Select to verify results
USE ChangeDataCaptureTest
SELECT * FROM cdc.dbo_Account_CT


A value of 1 in the “__$operation” column indicates a delete.

Time based search on table changes
Need to see changes in a table based on a given timestamp? No problem. When we enabled the table for change tracking, it also added a system table named “cdc.lsn_time_mapping”, which maps every transaction (LSN) to its timestamp.
Just join the change tracking table (Account_CT) with the “cdc.lsn_time_mapping” system table on the transaction id (start_lsn) and put your time filter criteria on the mapping table.


USE ChangeDataCaptureTest
SELECT B.*, A.tran_begin_time, A.tran_end_time 
FROM cdc.lsn_time_mapping A
INNER JOIN cdc.dbo_Account_CT B
ON A.start_lsn = B.__$start_lsn


Note :- In CDC, there is no way to trace the user who caused each transaction.



SqlDbType.Structured [another gem in ADO.NET] and Bulk insert using Table Valued Parameters

Use case
You want to insert multiple rows (say from 2 to 999 rows) from your .NET application to a database table.
You are using .NET Framework 3.5 or above with Sql Server 2008 or above.

Insert one row at a time
As you can imagine, this takes a performance hit because too many connections get opened and closed – it is chatty.
Use a CSV of rows
This approach is chunkier than the one above. Overall, the approach is :-
– Create a comma separated string of rows from the application
– Send CSV to a stored procedure from application
– Stored procedure would make use of a UDF to parse the CSV to a table variable
– Stored procedure would insert data from the table variable to the actual table
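A minimal sketch of such a split UDF (an assumption on my part – many variants of this exist; this one returns one row per comma separated value):

```sql
CREATE FUNCTION dbo.SplitCsv (@csv NVARCHAR(MAX))
RETURNS @result TABLE (Value NVARCHAR(100))
AS
BEGIN
    DECLARE @pos INT = CHARINDEX(',', @csv);
    WHILE @pos > 0
    BEGIN
        -- Take everything before the next comma as one value
        INSERT INTO @result VALUES (LTRIM(LEFT(@csv, @pos - 1)));
        SET @csv = SUBSTRING(@csv, @pos + 1, LEN(@csv));
        SET @pos = CHARINDEX(',', @csv);
    END;
    -- Last (or only) value after the final comma
    IF LEN(@csv) > 0
        INSERT INTO @result VALUES (LTRIM(@csv));
    RETURN;
END
```

This string surgery works, but it is exactly the kind of manual parsing that makes the CSV approach painful.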
Use SqlDbType.Structured
The above approach solves the problem of the application being too chatty with the database. However, it is not elegant and involves too much “manual” parsing of strings.

In .NET Framework 3.5 and onwards, SqlCommand can make use of a parameter type named SqlDbType.Structured, which enables a .NET application to send a DataTable (yes, a “System.Data.DataTable” object) directly to a stored procedure, where it can be used as if it were a table in the database.

In the below example we will send a DataTable of email addresses from .NET application to a database stored procedure which would insert data directly from this table to the actual table.

Database changes :-
/*Create a user defined table type which will hold the contents of the DataTable passed from the application.
This will be used as a Table Valued Parameter.*/
CREATE TYPE [dbo].[EmailAddressList] AS TABLE
(
    [EmailAddress] [NVARCHAR](100) NULL
)

/*Create the actual table to which we will insert data from the DataTable*/
CREATE TABLE EmailAddressDetails
(
    EmailAddress NVARCHAR(100),
    CreatedOn DATETIME DEFAULT GETDATE()
)

/*Create the stored procedure which will be called from the application*/
CREATE PROCEDURE EmailAddresses_InsertBatch
    @EmailAddressBatch [EmailAddressList] READONLY
AS
BEGIN
    INSERT INTO EmailAddressDetails (EmailAddress)
    SELECT E.EmailAddress FROM @EmailAddressBatch E
END

Application changes
//Function to create a DataTable with dummy email addresses.
//The DataTable created here should match the schema of the user defined table type created above.
private DataTable CreateEmailAddressDataTable()
{
    DataTable emailAddressDT = new DataTable();
    emailAddressDT.Columns.Add("EmailAddress", typeof(string));
    int emailAddressCount = 100;
    for (int i = 0; i < emailAddressCount; i++)
    {
        DataRow row = emailAddressDT.NewRow();
        row["EmailAddress"] = i.ToString() + ".something@lpl.com";
        emailAddressDT.Rows.Add(row);
    }
    return emailAddressDT;
}

//Function to call the stored procedure with the DataTable
private void AddEmailAddressToDb()
{
    DataTable dataTable = CreateEmailAddressDataTable();
    string connectionString = "Server=YourServerName;Database=YourDatabaseName;UserId=ashish;Password=ashish;";
    using (SqlConnection connection = new SqlConnection(connectionString))
    using (SqlCommand command = new SqlCommand())
    {
        command.Connection = connection;
        command.CommandText = "EmailAddresses_InsertBatch";
        command.CommandType = CommandType.StoredProcedure;

        var param = new SqlParameter("@EmailAddressBatch", SqlDbType.Structured);
        param.TypeName = "dbo.EmailAddressList";
        param.Value = dataTable;
        command.Parameters.Add(param);

        connection.Open();
        command.ExecuteNonQuery();
    }
}

Setting SqlDbType.Structured together with the TypeName of the user defined table type is the gem in ADO.NET. 🙂 When we send the DataTable to the stored procedure, it populates the user defined table type, which we can use directly in the stored procedure – in this case, inserting from it into the actual table. No parsing of CSV in a UDF – that takes away a big pain when you have a complex structure.
Note :- Microsoft recommends this approach when you are inserting fewer than 1000 rows. For more rows, consider using SqlBulkCopy.



From “Yes”SQL to NoSQL with Raven Db

What is “Yes”Sql?

Question :- Do you know what we can use to store all employees data?
Answer :- Yes! I would use a RDBMS (Relational Database Management System) software like Sql Server,
MySql, Oracle etc. I would store all your employee data in different database tables with relationships between them.

Question :- Do you know how can we can access and manage employees data?
Answer :- Yes! I would use a query language named SQL (Structured Query Language) to query and manage data.

If you have also been answering “Yes” to the above questions, you have been doing “Yes”SQL. I have been doing “Yes”SQL for quite some time, feeling like the frog in its well of RDBMSs, without even realizing that the outside world has become better and should be explored.


What is NoSQL?

Stop being the “frog in the well”!! Take the red pill and find out how deep the rabbit hole goes.

As we have been doing “Yes”SQL for quite some time, we will first take a look at what we have been doing for storing and retrieving data.

The following is a very simplified pair of employee database tables.


Add some data:-
    INSERT INTO Department(Id, Name) VALUES(1, 'Accounts')
    INSERT INTO Department(Id, Name) VALUES(2, 'Engineering')
    INSERT INTO Employee (ID, Name, DepartmentID) VALUES (1, 'Ashish', 1)
    INSERT INTO Employee (ID, Name, DepartmentID) VALUES (2, 'John', 2)

If we want to retrieve the name of an employee and the department in which he/she works using the employee id, we need to join the two tables :-
SELECT E.Name AS EmployeeName, D.Name AS DepartmentName FROM Employee E
INNER JOIN Department D ON E.DepartmentID = D.Id
WHERE E.Id = 1

Why did we need to join the tables? Because all the information for that particular employee is not stored in one table. This is the main characteristic of an RDBMS, and it is the first thing we need to get out of our minds in NoSQL.
Now, there are different types of NoSQL databases – RavenDB, MongoDB etc. However, in all of them the basic principle is the same – no relationships!
In RavenDB, all the data for this employee would be stored in a “document” :-
{
    "Name": "Ashish",
    "Department": {
        "Id": 1,
        "Name": "Accounts"
    }
}

As you can see, all the data (including the department details) for the employee with id “1” is stored in one place – a “document”, in RavenDB terms. This is similar to a “row” in an RDBMS, except that the data is not distributed across different tables; rather, all the data is in one place – in a document. For each employee, there would be one document.

More on this next.

NoSQL using RavenDB

What is Raven Db

RavenDB is a document database. It has the following characteristics :-

  • Non-relational
  • All data is stored and represented in JSON
  • There is no schema
  • Transactional


I downloaded the latest stable build of RavenDB and extracted it to my local “D:\Ashish\Research\RavenDB” folder. Look for /Server/RavenServer.exe.config and change the anonymous access setting to “All”.


Run Start.cmd as an administrator.

It should open a very nice looking management studio for RavenDB (performing a function “similar” to SQL Server Management Studio). It will ask you to create a new database, as you don’t have any yet. Enter a name for the database.

The new database is created :-


Client application to access and manage data in RavenDB
Add the RavenDB client to the client application via the NuGet package manager.

Below is the client code which, when executed, creates one document per entity (in this example, a company). Notice that a DocumentStore is similar to a connection; we can use the same one to create multiple documents. However, each document is bound to a session, which is why the session needs to be disposed before creating a new one. Also notice that RavenDB displays the created documents in the server admin browser.




Connecting to MySql database and fetching records using JDBC connection in TIBCO BusinessWorks

This looks as naïve as connecting to a database from .NET using the standard connection class. But hey, that was interesting and exciting too, the first time you did it. In this example, we will connect to a MySQL database from TIBCO BusinessWorks, fetch records, and write the first record to a text file – as simple as that.

Create an empty project in TIBCO BusinessWorks designer:-

Save the project with the directory path :-
Your project pane would look something like this :-
Now we need to add a JDBC connection. A JDBC Connection is a resource in TIBCO Designer. Therefore, for better organization, we create a “Resources” folder.


Add the JDBCConnection to the “Resources” folder as shown below.


The JDBC Connection is added as shown below, and we need to select the driver and the connection string for the MySQL database. Don’t forget the username and password. Click “Test Connection” to make sure the connection succeeds.


When you click “Test Connection”, you might see the below error :-

“BW-JDBC-100033 “Configuration Test Failed. Failed to find or load the JDBC driver. jdbc:mysql://localhost:3306/Research”

This really means BusinessWorks cannot see the MySQL driver required for the connection. The driver is basically a .jar file [mysql-connector-java-5.1.23-bin.jar] located in the MySQL installation directory. For my installation, it is “C:\Program Files (x86)\MySQL\Connector J 5.1.23\mysql-connector-java-5.1.23-bin.jar”. All I needed to do was copy that jar file to the TIBCO installation directory’s lib folder – for me, “C:\tibco\bw\5.10\lib”. Save the project, close the Designer and reopen the project. Now, on the JDBC Connection, click “Test Connection” again.


Voilà!!! It connected this time. However, I am pretty sure there must be a better way (maybe setting the classpath to point at the jar file) than copying the jar file itself.

Add a new folder named “Activities” to the root folder.


Add a “Process Definition” from the Palettes tab and double click on it to see the start and end points.


Add “JDBC Query” from the Palettes tab. In the configuration pane of the JDBC query, select the JDBC Connection as shown below :-


Add the SQL statement to get data from the MySQL table. My SQL statement was :-

SELECT ID, Name FROM Product

Writing the database query results to text file
Create a blank text file named “test.txt” in your project directory.

Add “Write File” from the Palettes tab and place it after the JDBC Query. Connect them using the “Create transition” button as shown below.


In the “Input” tab of the “Write File”, enter the name of the blank text file we created earlier.


Now we need to set the contents of the file. The contents will be the first record from the result set returned by the JDBC Query. Keep the cursor in the “textContent” text field and click on the yellow pencil button.


XPath Formula Builder would open. In this, drag the Functions > String >Concat to the XPath Formula text field on the right.


Drag “Data > JDBC Query > resultSet > Record > Id” onto the <<string1>> and Drag “Data > JDBC Query > resultSet > Record > Name” onto the <<string2>>


Since we are getting only one record, change the following :-

concat($JDBC-Query/resultSet/Record/Id, $JDBC-Query/resultSet/Record/Name)

to this (index = 1, as XPath indexing starts at 1) :-

concat($JDBC-Query/resultSet/Record[1]/Id, $JDBC-Query/resultSet/Record[1]/Name)
Click “Apply” on the XPath Formula Builder and then on the Input pane of the “Write File”. Save the project. Run the project and you would see the file updated with the first record from the database.


