1. mike S
  2. PowerBuilder
  3. Sunday, 16 July 2023 15:58 UTC

Curious as to whether anyone has done any speed comparisons of using the JSON objects (parser, generator, package) vs using datastores. Specifically, when using Find() over a range of rows in the datastore, vs the JSON methods to get/put data.

Before Appeon, we needed simple in-memory storage of key/values, so we built an NVO that uses a datastore to store those values. Now I'm wondering if the new JSON objects would be faster. My guess is no, but I really have no idea. The number of values stored can be over 180,000 for some processing.
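
For context, a minimal sketch of the kind of datastore-backed key/value NVO described above (ids_data and the skey/svalue column names are illustrative, not the actual object):

  // Sketch of a datastore-backed key/value NVO (names are illustrative).
  // ids_data is an instance datastore whose DataObject is an external
  // datawindow with two string columns, skey and svalue.

  public subroutine of_set (string as_key, string as_value)
      long ll_row
      // assumes keys contain no embedded quotes
      ll_row = ids_data.Find("skey = '" + as_key + "'", 1, ids_data.RowCount())
      if ll_row <= 0 then
          ll_row = ids_data.InsertRow(0)
          ids_data.SetItem(ll_row, "skey", as_key)
      end if
      ids_data.SetItem(ll_row, "svalue", as_value)
  end subroutine

  public function string of_get (string as_key)
      long ll_row
      ll_row = ids_data.Find("skey = '" + as_key + "'", 1, ids_data.RowCount())
      if ll_row > 0 then return ids_data.GetItemString(ll_row, "svalue")
      return ""
  end function

  // For comparison, after a one-time JsonParser.LoadString() of a flat
  // {"key":"value", ...} object, the JSON-side get would be just:
  //   ls_value = inv_parser.GetItemString(inv_parser.GetRootItem(), as_key)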


Sivaprakash BKR
  1. Monday, 17 July 2023 07:09 AM UTC
  2. PowerBuilder
  3. # 1

We tried to transfer data from SQL Server to PostgreSQL using both a datastore and JSON.

System used: Intel Core i7-7700, 8 GB RAM
Software: PowerBuilder 2022, SQL Server 2012 Express, PostgreSQL 15
Task: transfer 10 tables [ 7 master data tables and 3 transaction data tables ]

We tried both the datastore and the JSON approach on one particular transaction data table, which had 23 columns and 97,821 rows.

Goal: replace the identity column from SQL Server with a GUID in PostgreSQL by looking up the corresponding master data table.

We used u_json, available in the community forum, to read the JSON string.

Steps [ JSON ]

1.  Read the data into a datawindow
2.  Export it to JSON
3.  Call an NVO to do further processing, with the above JSON as a parameter
4.  Retrieve the required master data into a datastore
5.  Load the JSON - a very time-consuming step; we tried it and gave up after waiting more than 20 minutes without success
6.  Read each row and find the corresponding GUID value for the identity column, using the Find command on the master data datastore
7.  Insert all required values from the JSON into a new datastore
8.  Update the datastore

With this load, the process never got past the LoadString command, even after waiting 20-30 minutes in different runs (roughly the shape sketched below).
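
For reference, a minimal sketch of steps 1-8 with illustrative names (dw_source, ids_master, ids_target, and the id/guid/name columns are assumptions, not the actual objects); step 5 is the LoadString call that stalled:

  // Sketch of steps 1-8 (all names are illustrative).
  string ls_json, ls_err
  long ll_root, ll_item, ll_i, ll_count, ll_found, ll_new
  JsonParser lnv_parser

  dw_source.Retrieve()                          // 1. read data into a datawindow
  ls_json = dw_source.ExportJson()              // 2. export to JSON
  // 3. the JSON string is passed to an NVO; 4. master data is in ids_master

  lnv_parser = create JsonParser
  ls_err = lnv_parser.LoadString(ls_json)       // 5. this is the call that stalled
  if ls_err <> "" then return

  ll_root = lnv_parser.GetRootItem()            // plain export: root is an array of row objects
  ll_count = lnv_parser.GetChildCount(ll_root)
  for ll_i = 1 to ll_count
      ll_item = lnv_parser.GetChildItem(ll_root, ll_i)
      // 6. map the SQL Server identity value to its PostgreSQL GUID
      ll_found = ids_master.Find("id = " + &
          String(lnv_parser.GetItemNumber(ll_item, "id")), 1, ids_master.RowCount())
      if ll_found <= 0 then continue
      // 7. copy the required values into the target datastore
      ll_new = ids_target.InsertRow(0)
      ids_target.SetItem(ll_new, "guid", ids_master.GetItemString(ll_found, "guid"))
      ids_target.SetItem(ll_new, "name", lnv_parser.GetItemString(ll_item, "name"))
  next
  ids_target.Update()                           // 8. write to PostgreSQL
  destroy lnv_parser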

So we replaced JSON with a datastore and tried to process the same data. The steps were the same as above, with a change at step 5: instead of reading from the JSON, we read each row from the datastore and processed it. The transfer started immediately, as there was no LoadString command.

The process slowed down once we reached around 5,000 records, and with every additional 1,000 records we could see it slowing down further. In 30 minutes, only about 50,000 records got transferred.

So we stopped that process too. In the end we proceeded with neither; instead we created a temp table in the database and carried out the transfer there (sketched below). The transfer of that table took only 5 minutes.
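
For comparison, the database-side transfer can be sketched as a single set-based statement (table and column names are illustrative; it assumes the raw rows were first bulk-loaded into a work table, tmp_trans, on the PostgreSQL side):

  // Set-based variant of the transfer (all names are illustrative).
  string ls_sql

  ls_sql = "INSERT INTO trans_target (guid, col1, col2) " + &
           "SELECT m.guid, t.col1, t.col2 " + &
           "FROM tmp_trans t " + &
           "JOIN master_data m ON m.legacy_id = t.legacy_id"

  EXECUTE IMMEDIATE :ls_sql USING SQLCA;
  if SQLCA.SQLCode = 0 then
      COMMIT USING SQLCA;
  else
      ROLLBACK USING SQLCA;
  end if

The per-row Find() work moves into the JOIN, which the database performs set-at-a-time rather than row-by-row.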

Happiness Always
BKR Sivaprakash


  1. Chris Pollach @Appeon
  2. Monday, 17 July 2023 17:11 UTC
All automatically done for you using the Pipeline object! ;-)
  1. mike S
  2. Monday, 17 July 2023 17:51 UTC
It is fairly easy to roll your own as well if the Pipeline object is too restrictive; something like the sketch below.
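
For illustration, one minimal hand-rolled shape, assuming the source and target share the same column layout (all names are illustrative; error handling and disconnects are omitted):

  // Minimal hand-rolled pipeline sketch (all names are illustrative).
  transaction lt_src, lt_dst
  datastore lds_src, lds_dst
  long ll_rows

  lt_src = create transaction   // fill in source DBMS/connect parameters, then CONNECT USING lt_src;
  lt_dst = create transaction   // fill in target parameters, then CONNECT USING lt_dst;

  lds_src = create datastore
  lds_src.DataObject = "d_trans_rows"    // SELECT against the source table
  lds_src.SetTransObject(lt_src)

  lds_dst = create datastore
  lds_dst.DataObject = "d_trans_rows"    // same layout; update properties point at the target table
  lds_dst.SetTransObject(lt_dst)

  ll_rows = lds_src.Retrieve()
  lds_src.RowsCopy(1, ll_rows, Primary!, lds_dst, 1, Primary!)   // copied rows arrive as NewModified!
  if lds_dst.Update() = 1 then
      COMMIT USING lt_dst;   // Update() generates INSERTs for the NewModified! rows
  end if

  destroy lds_src
  destroy lds_dst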



  1. Sivaprakash BKR
  2. Tuesday, 18 July 2023 04:42 AM UTC
Chris,

The slowdown is while transferring data from one datastore (or JSON) to another datastore. Control never reached the datastore Update statement.

We completely rewrote the transfer process using DBMS facilities, which was way faster...

Andreas Mykonios
  1. Monday, 17 July 2023 06:19 AM UTC
  2. PowerBuilder
  3. # 2

Hi.

You mean 180,000 rows? If that's the case, then based on the documentation you should not use the JSON objects.

Andreas.

  1. Benjamin Gaesslein
  2. Wednesday, 19 July 2023 06:50 AM UTC
Just out of curiosity, how do you store 180k key/value pairs in 20k rows?
  1. Helpful
  1. Andreas Mykonios
  2. Wednesday, 19 July 2023 07:00 AM UTC
Hi Benjamin. Mike must be thinking of key values like the following:

{
  "Key1" : "Value1",
  "Key2" : "Value2",
  "Key3" : "Value3",
  "Key4" : "Value4",
  "Key5" : "Value5",
  "Key6" : "Value6",
  "Key7" : "Value7",
  "Key8" : "Value8"
}

which is equivalent of course to:

{"Key1":"Value1","Key2":"Value2","Key3":"Value3","Key4":"Value4","Key5":"Value5","Key6":"Value6","Key7":"Value7","Key8":"Value8"}



This is a set of 8 key/value pairs. The Help mentions that the data should not exceed 100,000 rows (or 20 MB), but it's not clear whether the above example counts as 1 data row or as 8. In reality it is equivalent to one "record" with eight "columns".



Andreas.
  1. mike S
  2. Wednesday, 19 July 2023 21:14 UTC
^^^ Yes, that. So each 'row' is a key/value pair where the value is another JSON object of those 8 or so key/value pairs.

In the datastore it is still just key/value columns, plus a parent object node id so I can link them together if I have multiple rows (objects). That pretty much flattens out the hierarchy. I use a second datastore to track the objects/rows; it includes index values to make the whole thing run a lot faster (the index values are used in the datastore Find - see the sketch below).
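
A minimal sketch of the lookup side of that design (ids_kv, ids_objs, and the column names are illustrative assumptions, not the actual objects):

  // ids_kv:   object_id (long), skey (string), svalue (string) - the flattened pairs
  // ids_objs: object_id (long), start_row (long), end_row (long) - the index

  public function string of_get (long al_object, string as_key)
      long ll_obj, ll_from, ll_to, ll_row
      // find the object's slice boundaries in the index datastore
      ll_obj = ids_objs.Find("object_id = " + String(al_object), 1, ids_objs.RowCount())
      if ll_obj <= 0 then return ""
      ll_from = ids_objs.GetItemNumber(ll_obj, "start_row")
      ll_to   = ids_objs.GetItemNumber(ll_obj, "end_row")
      // Find() scans only this object's rows instead of all 180k
      ll_row = ids_kv.Find("skey = '" + as_key + "'", ll_from, ll_to)
      if ll_row > 0 then return ids_kv.GetItemString(ll_row, "svalue")
      return ""
  end function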
Chris Pollach @Appeon
  1. Monday, 17 July 2023 00:41 UTC
  2. PowerBuilder
  3. # 3

Hi Mike;

  I have not done such a comparison, but since the DWO & its execution is primarily written in Assembler ... I would put my money on the DS. Just my $0.02. ;-)

Regards ... Chris 
