1. Ivan Vidmar
  2. PowerBuilder
  3. Monday, 4 July 2022 07:59 AM UTC

Our software developer uses your PowerBuilder development software to build our ERP system. We have a problem in our ERP system when transferring larger files into the database. On the picture below you can see 34,000 SQL statement executions when transferring a file of 100 MB. Because of so many SQL executions, the file transfer to the database takes about 2 minutes. This is not really acceptable; such a transfer should happen in less than 10 seconds.

Our developer says the slow transfers are due to the technology available in the PowerBuilder environment. Is this the only technology available for transferring files, or do you have some faster alternatives in PowerBuilder?
The database is SAP SQL Anywhere.

Thank you for your answer.

Attachments (1)
Andreas Mykonios Accepted Answer Pending Moderation
  1. Tuesday, 5 July 2022 11:46 AM UTC
  2. PowerBuilder
  3. # 1

I will mention that xp_read_file is still a lot faster.

Andreas.

Andreas Mykonios Accepted Answer Pending Moderation
  1. Tuesday, 5 July 2022 11:41 AM UTC
  2. PowerBuilder
  3. # 2

And that's the result from my test with 3 consecutive blob updates (the file I used is 110,129,003 bytes):

Start: 1
Elapsed: 3"
Start: 2
Elapsed: 3"
Start: 3
Elapsed: 4"
Complete.

Here are the settings I used to connect:

lnv_sqlca.DBMS = "ADO.Net"
lnv_sqlca.LogPass = '*********'
lnv_sqlca.LogId = "user"
lnv_sqlca.AutoCommit = False
lnv_sqlca.DBParm = "Namespace='System.Data.Odbc',DataSource='DSN'"

and the code I used to obtain the previous result:

transaction lnv_sqlca
blob lbl_tmp
integer li_i, li_err
long ll_fileno
time lt_time1, lt_time2

lnv_sqlca = create transaction

lnv_sqlca.DBMS = "ADO.Net"
lnv_sqlca.LogPass = '*********'
lnv_sqlca.LogId = "user"
lnv_sqlca.AutoCommit = False
lnv_sqlca.DBParm = "Namespace='System.Data.Odbc',DataSource='DSN'"

connect using lnv_sqlca;

if lnv_sqlca.sqlcode <> 0 then
	messagebox("Error", lnv_sqlca.sqlerrtext, stopsign!)
	return
end if

// Read the whole file into a blob with a single FileReadEx call.
ll_fileno = fileopen("C:\Downloads\SomeFile.zip", StreamMode!, read!, lockwrite!)
li_err = filereadex(ll_fileno, lbl_tmp)
fileclose(ll_fileno)

// Empty the target table before the timed runs.
delete from dbo.contents2
using lnv_sqlca;

if lnv_sqlca.sqlcode <> 0 then
	messagebox("Error", lnv_sqlca.sqlerrtext, stopsign!)
	rollback using lnv_sqlca;
	return
end if

commit using lnv_sqlca;

for li_i = 1 to 3
	yield()
	mle_log.text = mle_log.text + "Start: " + string(li_i) + "~r~n"
	lt_time1 = time(now())
	
	// Insert the key first; the blob itself goes in a separate UPDATEBLOB.
	insert into dbo.contents2 (content_pid) values (:li_i)
	using lnv_sqlca;
	
	if lnv_sqlca.sqlcode <> 0 then
		messagebox("Error", lnv_sqlca.sqlerrtext, stopsign!)
		rollback using lnv_sqlca;
		return
	end if
	
	// The timed part: push the whole blob to the database in one UPDATEBLOB.
	updateblob dbo.contents2
	set stored_content = :lbl_tmp
	where content_pid = :li_i
	using lnv_sqlca;
	
	if lnv_sqlca.sqlcode <> 0 then
		messagebox("Error", lnv_sqlca.sqlerrtext, stopsign!)
		rollback using lnv_sqlca;
		return
	end if
	
	commit using lnv_sqlca;
	lt_time2 = time(now())
	// ~" is an escaped double quote: it prints the seconds mark (") seen in the log.
	mle_log.text = mle_log.text + "Elapsed: " + string(SecondsAfter(lt_time1, lt_time2)) + "~"~r~n"
next

mle_log.text = mle_log.text + "Complete.~r~n"

disconnect using lnv_sqlca;

So you can ask your developer to try something similar.

I can't say why ODBC is so slow.

Andreas.

Andreas Mykonios Accepted Answer Pending Moderation
  1. Tuesday, 5 July 2022 11:33 AM UTC
  2. PowerBuilder
  3. # 3

Sorry, but I think it's not the same. I see you are using OLE DB and ADO.

But you do use the DSN to connect. I don't know if that would make a difference; I haven't tested it.

Andreas.

Ivan Vidmar Accepted Answer Pending Moderation
  1. Tuesday, 5 July 2022 11:19 AM UTC
  2. PowerBuilder
  3. # 4

I have found a 10-year-old piece of VBScript code for adding a file into the database (see the picture below). The configuration is the same (same database server, same ODBC, …). Instead of adding the file with the PowerBuilder application, I am adding it with VBScript. Adding it with VBScript takes only 4 seconds and 275 SQL statements during the transfer (the PowerBuilder application takes 2 minutes and 17,416 SQL statements).

That means SQL Anywhere 17 and ODBC are not the cause of the slow transfers; it depends on how the client side is implemented.

I will try to get the piece of code from our ERP software developer where BlobData is filled when adding files.

Thank you for your answers.

Andreas Mykonios Accepted Answer Pending Moderation
  1. Tuesday, 5 July 2022 06:27 AM UTC
  2. PowerBuilder
  3. # 5

Another thing you must keep in mind is that you will face similar limitations when reading this blob back from the database: it will be sent in packets, so reading may also be slow. But you may have the option to use xp_write_file and save your blob to a file on the server, as sketched below.
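
For example, something like this (an untested sketch; the table, column, and path are placeholders, and xp_write_file also needs the right privileges):

// Hedged sketch: let the SERVER write the blob to its own disk with
// xp_write_file, so the data never travels packet-by-packet to the client.
// xp_write_file returns 0 on success (check the SQL Anywhere help).
integer li_rc

SELECT xp_write_file( 'c:\\exported\\SomeFile.zip', stored_content )
INTO :li_rc
FROM dbo.contents2
WHERE content_pid = 1
USING SQLCA;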

Andreas.

Comment
  1. Chris Pollach @Appeon
  2. Tuesday, 5 July 2022 16:47 UTC
Hi Andreas ... For large blob data streams, I have found that setting the "Retrieve Blocking Factor" and/or especially the "Packet Size" (depending on the DB interface chosen) can be quite a critical performance factor as well. For example:

ODBC: SQLCA.DBParm = "Async=1,Block=10,SQLCache=20"

SNC: SQLCA.DBParm = "Block=100,PacketSize=8192"

MSO: SQLCA.DBParm = "Block=100,PacketSize=8192,AppName='PB2022',Host='Appeon'"

etc.

HTH

Regards ... Chris
Roland Smith Accepted Answer Pending Moderation
  1. Monday, 4 July 2022 18:12 UTC
  2. PowerBuilder
  3. # 6

You could compress the file locally before the UPDATEBLOB. Then decompress after SELECTBLOB.

You didn't say what version of PowerBuilder you are using. PB 2019 added the CompressorObject and ExtractorObject objects. If you are using an older version, you can use ZLib (and see the sketch after the link):

https://www.topwizprogramming.com/freecode_zlibwapi.html
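
For what it's worth, a rough sketch of this idea (assuming PB 2019 R2's blob-to-blob Compress() overload; the file, table, and column names are made up, so check the CompressorObject help for the exact syntax):

// Hedged sketch: shrink the blob before UPDATEBLOB so fewer bytes
// (and therefore fewer packets) have to cross the wire.
blob lbl_raw, lbl_small
long ll_file
integer li_rc
CompressorObject lnv_comp

ll_file = FileOpen("C:\Downloads\SomeFile.zip", StreamMode!)
FileReadEx(ll_file, lbl_raw)
FileClose(ll_file)

lnv_comp = create CompressorObject
lnv_comp.Format = GZipFormat!   // assumed enum value, see the help
li_rc = lnv_comp.Compress(lbl_raw, lbl_small)

if li_rc = 1 then
	updateblob dbo.contents2
	set stored_content = :lbl_small
	where content_pid = 1
	using SQLCA;
	commit using SQLCA;
end if
destroy lnv_comp

Of course, a file that is already compressed (like a .zip) won't shrink much; the gain is mainly for uncompressed content.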

Comment
  1. Miguel Leeuwe
  2. Tuesday, 5 July 2022 05:39 AM UTC
That's a good idea. Another thing you could do is use UTF-8 encoding for the blob instead of the default UTF-16LE.
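For instance, when the blob is built from a string (ls_text is just a placeholder; for binary files such as a .zip, encoding doesn't come into play):

// Converting text to a blob as UTF-8 roughly halves the byte count
// compared to the default UTF-16LE encoding.
string ls_text
blob lbl_data
ls_text = "... some large text ..."
lbl_data = Blob(ls_text, EncodingUTF8!)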
Miguel Leeuwe Accepted Answer Pending Moderation
  1. Monday, 4 July 2022 16:26 UTC
  2. PowerBuilder
  3. # 7

[Edited] I've struck through my answer, as I didn't realize that Andreas had already said the exact same thing.

His solution of copying the files to the server where your database is and then running an UpdateBlob on the server seems like the best solution to me. It will gain you a LOT of speed.

If I remember well, a long, long time ago I read somewhere that TCP packets are limited to 4 KB. If you updated a local database on your own computer, I guess it would be a lot faster.

Here though - https://stackoverflow.com/questions/2613734/maximum-packet-size-for-a-tcp-connection - they claim it might be up to a maximum of 64 KB, though it's mentioned that you'd rarely get that.

Even if you did get 64 KB in a single packet, it would still mean that your 100 MB has to be sent over the network in a LOT of packets.

I think that's the one and only reason why it's slow and why you are seeing 17,416 packets of ~4 KB being sent.

regards,

MiguelL

mike S Accepted Answer Pending Moderation
  1. Monday, 4 July 2022 15:23 UTC
  2. PowerBuilder
  3. # 8

This sounds like a limitation of SQL Anywhere, or of the coding used by your developer to load the blob. There can be additional factors, including network speed, database server hardware, database configuration, etc.

Packet size is generally settable in the database connection configuration on the application side, and you typically need to match it with the database server configuration for best performance. Setting it up to work as fast as possible with large blobs may slow down other, more typical processes, so that may not be in your best interest.
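
For SQL Anywhere in particular, I believe the packet size is governed by the CommBufferSize connection parameter, which you can pass in the connect string (a sketch; the DSN name is a placeholder, and the allowed range should be checked in the SQL Anywhere docs):

// Hedged example: ask SQL Anywhere for larger communication packets by
// passing CommBufferSize (in bytes) through the ODBC connect string.
SQLCA.DBParm = "ConnectString='DSN=erp_dsn;CommBufferSize=16000'"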

Without showing exactly what you are doing, we can only guess.

"such transfer should happen in less than 10 seconds"  

Why do you think that? Uploading a 100 MB file to a cloud storage service via REST APIs takes about 15-20 seconds, and that is over a 1 Gb fiber connection. Yes, a local database should be faster, but you seem to have a speed predetermined?

Comment
  1. Andreas Mykonios
  2. Tuesday, 5 July 2022 06:21 AM UTC
Hi Mike. He mentions that the file is 100 MB. I guess the file is segmented into packets of roughly 3,000 bytes; that's why they see 34,000 statements. But I'm not an expert.
Andreas Mykonios Accepted Answer Pending Moderation
  1. Monday, 4 July 2022 14:27 UTC
  2. PowerBuilder
  3. # 9

I believe that you won't be able to improve the speed using UpdateBlob (I guess that's how your developer currently updates blob data in the database). 100 MB is a lot of data, and it will pass through ODBC. In my tests, updating a 100 MB file into a local database took almost 45 seconds. I don't think you have an option to improve this time, as the 100 MB will travel in packets, and each packet has a maximum size that I think is, in the best scenario, 65,535 bytes... But I may be wrong.

SQL Anywhere has an extended stored procedure that can be used to write blobs (actually, it's used to read files located on your server). But it requires some privileges. The following example is from the SQL Anywhere help file. This routine is pretty fast (less than a second in my tests for a 100 MB file).

UPDATE Products
SET Photo=xp_read_file( 'c:\\sqlany\\scripts\\adata\\HoodedSweatshirt.jpg' )
WHERE Products.ID=600;

But I don't know if this is an acceptable solution. If it is, you should grant the appropriate privileges and try it. Read the instructions carefully. Also, I want to mention that it works only with drives local to the server, so you will have to create a share and place the files you want to load into the database there.
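
Roughly, the client-side flow could look like this (a sketch; the share, server path, and table names are placeholders):

// Hedged sketch: copy the file to a share that the database server sees
// as a local drive, then let the server pull it in with xp_read_file.
string ls_server_path

// Copy the local file to the shared folder (third argument = overwrite).
FileCopy("C:\Downloads\SomeFile.zip", "\\dbserver\exchange\SomeFile.zip", true)

// The same file as seen from the database server machine. Passed as a
// host variable, so the backslashes don't need doubling.
ls_server_path = "d:\exchange\SomeFile.zip"

UPDATE dbo.contents2
SET stored_content = xp_read_file( :ls_server_path )
WHERE content_pid = 1
USING SQLCA;

if SQLCA.SQLCode = 0 then
	commit using SQLCA;
else
	rollback using SQLCA;
end if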

Finally, you may explore the possibility of using SQL Anywhere's INPUT command. But this may be a hard task, or even impossible in your case.

(I say "you" because you asked the question, but if you want to test any of the above you will have to forward this information to the developer).

Andreas.

Ivan Vidmar Accepted Answer Pending Moderation
  1. Monday, 4 July 2022 14:01 UTC
  2. PowerBuilder
  3. # 10

I do not have access to the source code of the ERP system. Certainly, there are no web services involved. On the server side there is a SAP SQL Anywhere database; on the client side, a PowerBuilder application. The communication goes through ODBC over TCP/IP port 2683. Everything runs in a Windows environment.

As far as I can see in the profiler, BlobData is used. Look at the picture below for the statements from the profiler during the transfer of a 72 MB file. It looks like the file is sliced into pieces (each ~4 KB) and then transferred piece by piece by calling a function 17,416 times.

Andreas Mykonios Accepted Answer Pending Moderation
  1. Monday, 4 July 2022 09:28 AM UTC
  2. PowerBuilder
  3. # 11

Hi.

You are asking for help with a problem where file updates to the database take too long. Don't you think you should provide us with information on how your developer is doing it? There are too many ways to achieve such a result, from using blobs and UpdateBlob to using web services.

Andreas.
