1. Roland Smith
  2. PowerBuilder
  3. Wednesday, 24 October 2018 13:59 PM UTC

My application runs on SQL Anywhere or SQL Server. I connect to SQL Anywhere via ODBC and SQL Server by ODBC or OLE DB (user's choice).

I am running into a problem where text columns are cut off at 32768. For SQL Server via ODBC I am able to override the limit by creating pbodb###.ini with this:

[Microsoft SQL Server]
PBMaxTextSize='1024000'

For OLE DB, 1024000 is the default length. The problem I am having is that the documentation says nothing about how to get past this limitation for SQL Anywhere.

For SQL Anywhere I tried casting the column to binary and retrieving the data with SELECTBLOB. That seems to work, so the SQL Anywhere limitation appears to apply only to text. The problem is that, when converting the blob back to a string, I may or may not have to convert the data from Ansi.

How do I overcome the text length limit with SQL Anywhere?

If I have to use blob, how do I tell if the data is Ansi or Unicode?

Olan Knight Accepted Answer Pending Moderation
  1. Wednesday, 24 October 2018 14:58 PM UTC
  2. PowerBuilder
  3. # 1

The standard way to deal with massive data blocks is to cut them up into more bite-sized pieces.

If your piece of data is 2 million bytes, then you read it in 32765 bytes at a time. Reassemble the data at the target end of the application if that is required.

 

Olan
Comment
  1. Roland Smith
  2. Wednesday, 24 October 2018 15:18 PM UTC
This is a pre-existing table out of my control. It is the sysprocedure table. I am pulling the syntax of stored procedures and functions.
  1. Brad Mettee
  2. Wednesday, 24 October 2018 15:47 PM UTC
Expanding on Olan's idea, can you make a loop that reads portions of the syntax using substr and reassembles them into a string in PB? (A sketch of that approach is below.)

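A minimal PowerScript sketch of the chunked read Olan and Brad describe, assuming the sys.sysprocedure table Roland mentions and SQL Anywhere's substr() and length() functions; the 32000-character chunk size, the variable names, and the al_objid argument are illustrative only, not tested code:

// Sketch only: fetch proc_defn in chunks small enough to fit in a PB string,
// then reassemble them. Chunk size and names are assumptions.
long ll_total, ll_start
string ls_chunk, ls_source

SELECT length(proc_defn) INTO :ll_total
	FROM sys.sysprocedure
	WHERE object_id = :al_objid;
IF sqlca.SQLCode <> 0 THEN Return sqlca.SQLErrText

ll_start = 1
ls_source = ""
DO WHILE ll_start <= ll_total
	// Each substr() result stays well under the 32K string limit
	SELECT substr(proc_defn, :ll_start, 32000) INTO :ls_chunk
		FROM sys.sysprocedure
		WHERE object_id = :al_objid;
	IF sqlca.SQLCode <> 0 THEN EXIT
	ls_source = ls_source + ls_chunk
	ll_start = ll_start + 32000
LOOP

Return ls_source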
Chris Pollach @Appeon Accepted Answer Pending Moderation
  1. Wednesday, 24 October 2018 15:31 PM UTC
  2. PowerBuilder
  3. # 2

Hi Roland;

  Are your super large SA text columns defined as "Long VarChar" vs just "VarChar"?

Regards ... Chris

Comment
  1. Brad Mettee
  2. Wednesday, 24 October 2018 15:43 PM UTC
Chris,

I tested this out with a known long varchar column using a space(40000) insert. If I select length(colname) from the table, it's 40k; if I select into a string var and get its length, it's 32768. It's very much a limitation of PB and not the column.
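A short sketch of the kind of check Brad describes; the table, column, and key names here are hypothetical, not taken from the thread:

// Hypothetical table "mytable" with a LONG VARCHAR column "longcol"
// populated beforehand with space(40000).
long ll_db_len
string ls_value

// Length as computed by the database engine: 40000
SELECT length(longcol) INTO :ll_db_len FROM mytable WHERE pk = 1;

// The same value fetched into a PowerScript string: Len(ls_value) reports 32768
SELECT longcol INTO :ls_value FROM mytable WHERE pk = 1;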
Olan Knight Accepted Answer Pending Moderation
  1. Wednesday, 24 October 2018 16:01 PM UTC
  2. PowerBuilder
  3. # 3

Brad is exactly right - that's how you read massive blocks of data: in a loop. You are done with each individual block when you reach the EOL marker or, if you know the length of the data, when you retrieve the last block.

Olan

Roland Smith Accepted Answer Pending Moderation
  1. Wednesday, 24 October 2018 17:00 PM UTC
  2. PowerBuilder
  3. # 4

Here is the exact code I have currently:

SELECTBLOB CAST(proc_defn AS LONG BINARY) INTO :lblb_source
	FROM sys.sysprocedure
	WHERE object_id = :al_objid;

choose case sqlca.SQLCode
	case 0
		Return String(lblb_source, EncodingAnsi!)
	case 100
		Return "Source code not found!"
	case else
		Return sqlca.SQLErrText
end choose

This returns everything, as the limit is only on text. As you can see, I am converting from Ansi to Unicode. Is there a possibility that the source code may be Unicode?

I have tested with 'EAS Demo DB V126' and 'EAS Demo DB V126 Unicode' and the source code comes back as Ansi in both databases.

 

Comment
  1. Roland Smith
  2. Wednesday, 24 October 2018 17:18 PM UTC
Since the column proc_defn is defined as LONG VARCHAR, it must always be Ansi. If it were NVARCHAR it would be UTF-8. That is according to the documentation on datatypes. There isn't anything specific to the sysprocedure table that I can find.
  1. Chris Keating
  2. Wednesday, 14 November 2018 20:32 PM UTC
SQL Anywhere stores bytes and uses the character set encoding to interpret them. The PowerBuilder sample Unicode database uses UTF8BIN as the collation sequence (i.e. the rules for sorting characters) and UTF-8 as the character set encoding for CHAR-based columns (SQL Anywhere does not distinguish between char and varchar). That means the Unicode demo database can have a procedure definition with UTF-8 characters that are not mapped to an Ansi character. In that case you cannot assume ANSI! or UNICODE! for the value returned.
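Building on Chris Keating's point, one way to decide which encoding to pass to String() is to ask the database for its CHAR character set first. This is only a sketch: it assumes SQL Anywhere's db_property('CharSet') property and reuses the lblb_source blob from Roland's code; it is not something confirmed elsewhere in this thread.

// Sketch: pick the String() encoding from the database character set.
// db_property('CharSet') typically returns e.g. 'UTF-8' or 'windows-1252'.
string ls_charset

SELECT db_property('CharSet') INTO :ls_charset
	FROM sys.dummy;

IF Pos(Lower(ls_charset), 'utf-8') > 0 THEN
	Return String(lblb_source, EncodingUTF8!)
ELSE
	Return String(lblb_source, EncodingANSI!)
END IF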