Send small blobs inline. #8318

Merged: 10 commits into master on Feb 5, 2025

Conversation

@hvlad (Member) commented Nov 14, 2024

The feature allows sending small blob contents in the same data stream as the main result set.
This lowers the number of round trips required to get blob data and significantly improves performance on high-latency networks.

The blob metadata and data are sent using a new packet type, op_inline_blob, and a new structure, P_INLINE_BLOB.
The op_inline_blob packet is sent before the corresponding op_sql_response (when answering op_execute2 or op_exec_immediate2) or op_fetch_response (when answering op_fetch).
There can be as many op_inline_blob packets as there are blob fields in the output format.
NULL blobs and blobs that are too big are not sent.
A blob is sent as a whole, i.e. the current implementation doesn't support sending part of a blob. The reasons are a wish not to over-complicate the code and the fact that seek is not implemented for segmented blobs.
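For orientation, a rough sketch of the kind of payload such a packet has to carry is shown below; the field names and types are illustrative only and are not the actual P_INLINE_BLOB declaration from the protocol headers:

// Illustrative sketch only - not the actual P_INLINE_BLOB declaration.
// The packet has to identify the transaction and the blob the data belongs to,
// carry the blob info (total length, segment count, type) and the data itself.
struct P_INLINE_BLOB_sketch
{
	ULONG   p_tran_id;     // transaction the cached blob will be bound to
	SQUAD   p_blob_id;     // blob id as it appears in the fetched row
	CSTRING p_blob_info;   // blob info clumplets (length, segments, type)
	CSTRING p_blob_data;   // whole blob contents, segment by segment
};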

The current, initial implementation sends all blobs whose total size is not greater than 16KB.

The open question is what API changes are required to allow the user to customize this process:

  • allow enabling and disabling inline blob sending
  • allow setting the inline blob size limit
  • decide at what level the settings above should apply: per-attachment, per-statement, etc.
  • decide the default and maximum values for the inline blob size limit.

Also, good to have but not required:

  • allow setting a BPB in advance
  • allow enabling blob inlining on a per-field basis, if the output format contains many blob fields.

This PR is currently in draft state and is published for early testers and commenters.

@hvlad hvlad self-assigned this Nov 14, 2024
@hvlad hvlad marked this pull request as draft November 14, 2024 11:58
@aafemt (Contributor) commented Nov 14, 2024

Why a new packet instead of sending them in the response message itself? IIRC response packets contain their own format, so inline BLOBs can be described individually as strings and then transformed into cached BLOBs on the client.

@AlexPeshkoff (Member) commented:

Vlad, I suppose the content of op_inline_blob is cached by the remote provider in order to serve requests for data in those blobs without network access. If yes, how long is the data in that cache kept?

@hvlad (Member, Author) commented Nov 14, 2024

Vlad, I suppose the content of op_inline_blob is cached by the remote provider in order to serve requests for data in those blobs without network access. If yes, how long is the data in that cache kept?

Yes, sure. A cached blob is bound to the transaction object and will be released when the first of the following happens:

  • at transaction end, or
  • when the user opens the blob with a non-empty BPB, or
  • when the user opens the blob with an empty BPB and then closes it.

Note that when the user opens the blob with a non-empty BPB, the cached blob is discarded.
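In pseudocode, the client-side open path implied by these rules could look roughly as follows (a sketch only; the function names are hypothetical, not the actual Remote provider code):

// Sketch of the release rules above (hypothetical names, not the actual code).
CachedBlob* cached = transaction->findCachedBlob(blobId);
if (cached)
{
	if (bpbLength != 0)
	{
		// custom BPB: the cached copy cannot be used - discard it and go to the wire
		transaction->discardCachedBlob(blobId);
	}
	else
	{
		// empty BPB: serve reads from the cached copy;
		// the cache entry is released when the application closes the blob
		return openFromCache(cached);
	}
}
return openOverTheWire(blobId, bpb, bpbLength);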

@AlexPeshkoff (Member) commented:

Imagine a RO-RC transaction which lasts VERY long (nothing prevents keeping it open for the client application's lifetime). Wouldn't such a long cache lifetime be an overhead?

@hvlad (Member, Author) commented Nov 14, 2024

Imagine a RO-RC transaction which lasts VERY long (nothing prevents keeping it open for the client application's lifetime). Wouldn't such a long cache lifetime be an overhead?

It is supposed that cached blobs will be read by the application.
Anyway, it would be good to have a way to set a limit on the blob cache size - is that your point?

@AlexPeshkoff (Member) commented:

To tell the truth, my first thought was that the cache is very tiny - just the blobs from the last fetched row - but this appears inefficient when we try to support various grids.

First of all, let's think about binding the cache not to the transaction but to the request/statement. It's hardly typical to close a statement and read blobs from it after close. Moreover, in the worst case that will still work - in the old way, over the wire.

Limiting the cache size adds one more tunable parameter, and I'm afraid there are already too many of them: blob size limit per attachment or per statement, maybe on a per-field basis (at least on/off), default BPB, maybe on a per-field basis too. (Hmm - are there many cases when >1 blob per row is returned?)

Last but not least - is blob inlining enabled by default? To my mind yes, but very reasonable (i.e. not too big) defaults should be used.

@sim1984 commented Nov 14, 2024

There should be cache size limits in any case. If you load 1,000,000 records (1 blob per record) at 16K each, that's already 16 GB. But if I understand correctly, this happens only if the user does not read these cached blobs as the records are fetched. Maybe it's worth limiting the blob cache to some number of entries, for example 1000 (configurable), and when the number of blobs becomes greater than this value, the oldest of them are removed from the cache.

And of course, this should be possible to enable/disable at the statement level. And perhaps some DPB item to set the default parameter.
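For illustration, a count-limited cache of the kind suggested above could evict the oldest entry roughly like this (a sketch only; the blob id is simplified to a plain integer and the names are not from the actual code):

#include <cstdint>
#include <deque>
#include <unordered_map>
#include <vector>

// Sketch of the suggested policy: keep at most maxCachedBlobs entries,
// dropping the oldest one when a new inline blob arrives.
struct InlineBlobCache
{
	std::size_t maxCachedBlobs = 1000;  // configurable limit
	std::deque<std::uint64_t> order;    // oldest blob id first
	std::unordered_map<std::uint64_t, std::vector<unsigned char>> blobs;

	void add(std::uint64_t blobId, std::vector<unsigned char> data)
	{
		if (blobs.size() >= maxCachedBlobs && !order.empty())
		{
			blobs.erase(order.front());  // evict the oldest cached blob
			order.pop_front();
		}
		order.push_back(blobId);
		blobs[blobId] = std::move(data);
	}
};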

@hvlad (Member, Author) commented Nov 14, 2024

To tell the truth, my first thought was that the cache is very tiny - just the blobs from the last fetched row - but this appears inefficient when we try to support various grids.

Yes, those were my thoughts too. Also, consider batch fetching, when a whole batch of rows must be read from the wire - it will cache all the corresponding blobs anyway.

First of all, let's think about binding the cache not to the transaction but to the request/statement. It's hardly typical to close a statement and read blobs from it after close. Moreover, in the worst case that will still work - in the old way, over the wire.

That was in my very first version of the code - until I started to handle op_exec_immediate2, which has no statement :)

It is possible to mark blobs by statement id (when available) and remove such blobs from the transaction cache on statement close.
But I prefer to avoid that, so far. It gives "not typical" apps a chance to access cached blobs after statement close - and I guess it is not so atypical when there is no cursor, i.e. for 'EXECUTE PROCEDURE', etc.

Limiting the cache size adds one more tunable parameter, and I'm afraid there are already too many of them: blob size limit per attachment or per statement, maybe on a per-field basis (at least on/off), default BPB, maybe on a per-field basis too. (Hmm - are there many cases when >1 blob per row is returned?)

If there are too many parameters, we can put them into a separate dedicated interface, say IClientBlobCache, that would be implemented by the Remote provider only.

And I'm sure there are applications that have many blobs in their result sets. Look at the monitoring tables, for example: MON$STATEMENTS has two blobs, and there are others.

Last but not least - is blob inlining enabled by default? To my mind yes, but very reasonable (i.e. not too big) defaults should be used.

Currently it is enabled - otherwise nobody would be able to test the feature ;)

One of the goals of this PR is to discuss and then implement the necessary set of parameters and the corresponding API to customize the blob cache.

So far, I see two really required parameters: 'maximum blob size for inline sending' (per-statement or per-attachment - to be decided; it should be known to the server) and 'size of blob cache' (per-attachment, client-only). The others are 'good to have' but not strictly required: BPB, per-field inlining.

@hvlad (Member, Author) commented Nov 15, 2024

The builds for testing can be found here:
https://github.com/FirebirdSQL/firebird/actions/runs/11836803458
Scroll down the page to the 'Artifacts' section.

@sim1984 commented Nov 18, 2024

I tried to conduct experiments on a local network. There are no latency problems there; however, I will provide some results of the experiment.

Run the query in different variants

select
  remark
from horse
where remark is not null

It contains 66794 small BLOBs.

Run IBExpert with this query and do FetchAll

Results Firebird-5.0.2.1567-0-9fbd574-windows-x64 (server + client):
640 ms, memory consumption 38 MB (IBExpert)

Results Firebird-6.0.0.526-0-Initial-windows-x64 (server + client):
1 s 187 ms, memory consumption 385 MB (IBExpert)

Probably there will be a gain in networks with high latency; I will try to check in the near future. In the meantime, the experiment shows that blob prefetching by default is not always useful and at least consumes more memory.

PS

select
  sum(octet_length(remark)) as len
from horse
where remark is not null
LEN
=========
6 558 101

Overhead seems quite large to save 6 MB.

Am I right in understanding that 16K of memory is always allocated for each BLOB? I also don't know how exactly BLOBs are handled in IBExpert, perhaps it doesn't close a fully read BLOB until the end of the query/transaction. What about the limitation of storing the last N BLOBs in the cache?

@AlexPeshkoff (Member) commented Nov 18, 2024 via email

@hvlad (Member, Author) commented Nov 18, 2024

Am I right in understanding that 16K of memory is always allocated for each BLOB?

Yes, and it was not introduced by this PR.

BTW, 66794 blobs should consume nearly 1 GB, while you see about 350 MB - which memory counter are you looking at?
I tried with 67000 blobs of 1024 bytes each and saw about a 1.4 GB increase in 'Private Bytes' and about a 1.1 GB increase in 'Virtual Memory' (that was with a DEBUG build).

I also don't know how exactly BLOBs are handled in IBExpert, perhaps it doesn't close a fully read BLOB until the end of the query/transaction.

I doubt IBE reads any blob contents when it shows data in a grid - until the user explicitly asks for it by moving the mouse cursor over a grid cell or by pressing the '...' button in the cell. And the debugger confirms it.

What about the limitation of storing the last N BLOBs in the cache?

It was proposed, but we still have not defined which settings we need and what API to manage them.

Thanks for testing!

@hvlad (Member, Author) commented Nov 18, 2024

@AlexPeshkoff: I think the time overhead is related to memory allocations.

@sim1984 commented Nov 18, 2024

I just looked at the Task Manager. It is clear that it does not display memory quite correctly, but here the difference is visible to the naked eye. And I have no complaints about performance; I understand that somewhat different conditions need to be tested (primarily networks with high latency). Nevertheless, I consider this test useful to show that without proper settings we can at least get excessive memory consumption.

@hvlad (Member, Author) commented Nov 19, 2024

As there are no better ideas, I offer the following API changes:

interface Statement : ReferenceCounted
{
...
version:	// 6.0
	// Inline blob transfer
	uint getMaxInlineBlobSize(Status status);
	void setMaxInlineBlobSize(Status status, uint size);
}
interface Attachment : ReferenceCounted
{
...

version:	// 6.0
	// Blob caching by client
	uint getBlobCacheSize(Status status);
	void setBlobCacheSize(Status status, uint size);

	// Inline blob transfer
	uint getMaxInlineBlobSize(Status status);
	void setMaxInlineBlobSize(Status status, uint size);
}

@AlexPeshkoff (Member) commented Nov 19, 2024 via email

@aafemt (Contributor) commented Nov 19, 2024

I see no need for new methods in IAttachment; it can be handled in a backward-compatible way using DPB and info items, unless someone wants to make such adjustments dynamically during the attachment lifetime.

@sim1984 commented Nov 19, 2024

I see no need for new methods in IAttachment; it can be handled in a backward-compatible way using DPB and info items, unless someone wants to make such adjustments dynamically during the attachment lifetime.

The presence of methods in IAttachment does not remove the need for DPB tags to initially set these parameters when connecting. And yes, since the cache itself is per transaction, it makes sense to change these parameters during the connection. If I understand correctly, the value from setBlobCacheSize is passed to the transaction at startup, and IAttachment::setMaxInlineBlobSize is used during IAttachment::execute and IAttachment::openCursor, and passes the default value to IStatement when calling IAttachment::prepare.

@hvlad (Member, Author) commented Nov 23, 2024

New methods were added:

interface Attachment
...
	// Blob caching by client
	uint getMaxBlobCacheSize(Status status);
	void setMaxBlobCacheSize(Status status, uint size);

	// Inline blob transfer
	uint getMaxInlineBlobSize(Status status);
	void setMaxInlineBlobSize(Status status, uint size);
...
interface Statement
...
	// Inline blob transfer
	uint getMaxInlineBlobSize(Status status);
	void setMaxInlineBlobSize(Status status, uint size);

New DPB and info items will be added later, after the interface changes above are finally stabilized.

Common behaviour

All the methods above are implemented by both the Remote and Engine providers.

The Engine provider sets an isc_wish_list error in the status and returns zero, when appropriate.
The Remote provider checks the protocol version and gets/sets internal object data (no network round trip), or sets an isc_wish_list error in the status and returns zero, when appropriate.

Inline blob size

Attachment::setMaxInlineBlobSize() sets the default value for the inline blob size. This value is used by Attachment::execute() and Attachment::openCursor().

Also, this value is assigned to the new Statement instance created by Attachment::prepare(). It can be changed for a given statement using Statement::setMaxInlineBlobSize(), but this should be done before calling Statement::execute() or Statement::openCursor().

The default value for the inline blob size is 16KB. To disable inline blob transfer, set the inline blob size to zero.

Currently, the maximum value of the inline blob size is not limited. Whether some limit should be introduced, and what value to choose, is open for discussion. The obvious value of the maximum possible segment size (64KB-2, or 65534 bytes) could be recommended for cursors (many blobs to cache), but in the case of a single-row result set it is not so obvious. The protocol is not limited by a 2-byte length, if I'm not mistaken.

The inline blob size value is transferred within op_execute, op_execute2 and op_exec_immediate2 packets, for supported protocol versions only.

Client blob caching

The content of inline blobs is cached at the Attachment level. A blob is removed from the cache after the application uses it (opens and then closes it) or if the application opens the same blob using a custom BPB.

The size of the client blob cache is limited. The default size is 10MB and it can be changed using Attachment::setMaxBlobCacheSize(). There is no upper or lower limit for this value. A limit change is not applied immediately, i.e. if the new limit is less than the currently used size, nothing happens. If the blob cache has no space for a new inlined blob, that blob is discarded silently.

Note: currently the per-blob buffer is pre-allocated and its size is 16KB. This means that smaller blobs require no additional memory re-allocations but occupy 16KB in memory (and in the blob cache) regardless of their real size. I am considering changes in this regard.
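To make the intended call order concrete, here is a minimal usage sketch of the new methods (assuming already created att, tra and master objects from the OO API; the variable names and values are illustrative):

// Sketch: tune inline blob transfer and the client-side blob cache before
// preparing/executing a statement; error handling omitted.
Firebird::ThrowStatusWrapper status(master->getStatus());

att->setMaxBlobCacheSize(&status, 4 * 1024 * 1024);   // 4 MB client-side blob cache
att->setMaxInlineBlobSize(&status, 8 * 1024);         // default for statements prepared later

Firebird::IStatement* stmt = att->prepare(&status, tra, 0,
	"select remark from horse where remark is not null", 3 /* SQL dialect */, 0);

// A per-statement override must happen before execute()/openCursor()
stmt->setMaxInlineBlobSize(&status, 0);                // disable inlining for this statement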

@sim1984 commented Dec 16, 2024

As this fix one problem, but create second.

Is it measured fact or just a guess ? Note, blobs cache size is limited by 10MB by default.

No, I haven't tested it yet; I'm only referring to @sim1984 information about high memory usage.

Increased memory consumption existed in the very first implementation. In the second one, after @hvlad introduced buffer size restrictions for BLOBs, such effects no longer exist. The client consumes only slightly more memory (approximately the buffer size).

@livius2 commented Dec 20, 2024

Increased memory consumption was in the very first implementation.

Great news :)

@hvlad (Member, Author) commented Jan 27, 2025

I'm implementing new DPB items to set the blob cache size and inline blob size, and have a question.
What is the expected/correct behavior when a DPB item is not supported by the underlying wire protocol but is not crucial for the attachment itself (i.e. could be ignored with no harm): raise an error on attach, put a warning into the status, or silently ignore such items?

@AlexPeshkoff (Member) commented Jan 27, 2025 via email

@dyemanov (Member) commented:

I'm implementing new DPB items to set the blob cache size and inline blob size, and have a question. What is the expected/correct behavior when a DPB item is not supported by the underlying wire protocol but is not crucial for the attachment itself (i.e. could be ignored with no harm): raise an error on attach, put a warning into the status, or silently ignore such items?

Either ignore them or return a warning.

@hvlad (Member, Author) commented Jan 27, 2025

The initial JS approach was to ignore unknown and unsupported DPB items.

This is a slightly different case - the items are known and supported (by the code), they just can't be used with the current protocol version (which becomes known only after attach, btw).
Thus I had doubts.
Thanks for the opinions - I agree, ignoring such items is the correct way to go.

… and inline blob size.

Add missed checks for recently introduced API routines.
@AlexPeshkoff (Member) commented Jan 27, 2025 via email

@hvlad (Member, Author) commented Jan 28, 2025

One more question: the wire protocol already allows passing buffers larger than MAX_USHORT, but the blob-related code in the Remote subsystem uses USHORTs for various lengths (see struct Rbl and its usage).
I can limit the max inline blob size by MAX_USHORT or change struct Rbl and the related code to handle larger buffers.
Will there be a benefit in sending large buffers (>= 64KB) at once?
Opinions?

@AlexPeshkoff (Member) commented Jan 28, 2025 via email

@sim1984 commented Jan 28, 2025

I would like to make one more improvement. It is not directly related to the ticket, but it has been long overdue.

We have a search function in streaming BLOBs

int IBlob::seek(StatusType* status, int mode, int offset)

I think it is high time to move away from the 2G limit and introduce a new function that does not have such a limit

int64_t IBlob::seek2(StatusType* status, int mode, int64_t offset)

See also #550

And also add

void IBlob::getInfo2(StatusType* status,
  unsigned itemsLength,
  const unsigned char* items,
  unsigned bufferLength,
  unsigned char* buffer)

which would return a 64-bit length, for BLOBs over 2GB.

@dyemanov (Member) commented:

Larger buffers usually compress better, although I doubt the difference will be significant. But if this change decreases the number of our (application-level) round trips, it may be a good bonus.

@hvlad (Member, Author) commented Jan 28, 2025

Thanks.

Note, the implementation limit of MAX_USHORT applies to the internal buffer size, not to the blob data size, because in Remote all blobs (both stream and segmented) are passed as segments with a 2-byte length prefix.
Thus a buffer of MAX_USHORT bytes can't fit a blob of the same size.

If this is not a problem, I'm going to follow Alex's advice and limit the max inline blob size to MAX_USHORT, at least for now.

@hvlad (Member, Author) commented Jan 28, 2025

I would like to make one more improvement. It is not directly related to the ticket, but it has been long overdue.

For a good reason, I think ;)

We have a search function in streaming BLOBs

int IBlob::seek(StatusType* status, int mode, int offset)

I think it is high time to move away from the 2G limit and introduce a new function that does not have such a limit

For what? Who stores 2GB+ blobs in a database? Is it really practical?
And I'm not going to address it in this PR anyway.

int64_t IBlob::seek2(StatusType* status, int mode, int64_t offset)

See also #550

And also add

void IBlob::getInfo2(StatusType* status,
  unsigned itemsLength,
  const unsigned char* items,
  unsigned bufferLength,
  unsigned char* buffer)

Which would return 64-bit length to get BLOBs over 2GB.

This is not necessary, as the API already allows passing 64-bit ints in info buffers.
The question is whether apps are ready to receive it.

@aafemt (Contributor) commented Jan 28, 2025

The question is whether apps are ready to receive it.

If they are using isc_portable_integer() instead of the deprecated isc_vax_integer() - most likely yes.
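For instance, a parse loop over a blob info buffer stays correct for 8-byte values when written like this (a sketch; it assumes the buffer was filled by a blob info call that requested isc_info_blob_total_length):

#include <ibase.h>

// Sketch: extract the total blob length from an info buffer in a way that
// remains correct if the server returns an 8-byte value.
ISC_INT64 getTotalBlobLength(const unsigned char* buffer)
{
	const unsigned char* p = buffer;
	while (*p != isc_info_end)
	{
		const unsigned char item = *p++;
		const short len = (short) isc_portable_integer(p, 2);  // clumplet length
		p += 2;

		if (item == isc_info_blob_total_length)
			return isc_portable_integer(p, len);  // handles 2-, 4- or 8-byte values

		p += len;
	}
	return -1;  // item not present
}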

@AlexPeshkoff (Member) commented Jan 28, 2025 via email

@dyemanov (Member) commented:

On 1/28/25 18:24, Vlad Khorsun wrote: For what ? Who store 2GB+ blob at database ? Is it really practical ? And I not going to address it in this PR anyway.
In addition I have big doubts that engine is ready for such blobs.

It does work (except incorrect length stored/reported but this is fixable).

@hvlad (Member, Author) commented Jan 28, 2025

On 1/28/25 18:24, Vlad Khorsun wrote: For what ? Who store 2GB+ blob at database ? Is it really practical ? And I not going to address it in this PR anyway.
In addition I have big doubts that engine is ready for such blobs.

It does work (except incorrect length stored/reported but this is fixable).

Sure. But this doesn't answer my question ;)

@dyemanov (Member) commented Jan 28, 2025

On 1/28/25 18:24, Vlad Khorsun wrote: For what ? Who store 2GB+ blob at database ? Is it really practical ? And I not going to address it in this PR anyway.
In addition I have big doubts that engine is ready for such blobs.

It does work (except incorrect length stored/reported but this is fixable).

Sure. But this doesn't answer my question ;)

I'm not going to argue whether it's practical or not ;-) Personally, I'd rather avoid that if possible. But anyway, blobs beyond 2GB are documented as allowed thus they should be usable.

But surely this is for another day and unrelated to this PR.

@sim1984 commented Jan 28, 2025

On 1/28/25 18:24, Vlad Khorsun wrote: For what ? Who store 2GB+ blob at database ? Is it really practical ? And I not going to address it in this PR anyway.
In addition I have big doubts that engine is ready for such blobs.

It does work (except incorrect length stored/reported but this is fixable).

Sure. But this doesn't answer my question ;)

I'm not going to argue whether it's practical or not ;-) Personally, I'd rather avoid that if possible. But anyway, blobs beyond 2GB are documented as allowed thus they should be usable.

But surely this is for another day and unrelated to this PR.

That's exactly what I wanted to say. I don't really think it's practical to store even 1GB in a BLOB, but it's allowed. As far as I know, our BLOBs can store up to 32GB. However, the API and network protocol do not allow seek in such BLOBs. This is more of a desire to make the API more consistent with the capabilities of the engine. And of course, this is not the PR for this. I just decided to remind you since we're already dealing with BLOBs.

Sorry for digressing from the topic of discussion.

@hvlad (Member, Author) commented Jan 29, 2025

The maximum inline blob size is now limited to MAX_USHORT (65535) bytes.
The limit is applied to the buffer size, not to the "bare" blob data, i.e. it includes an additional 2 bytes per segment.

When the max inline blob value is set (using the API or DPB) and the passed value is greater than MAX_USHORT, it is silently decreased to MAX_USHORT.
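A small sketch of the clamping and of the segment overhead it accounts for (illustrative only, not the actual Remote provider code):

// Sketch: values above the wire limit are silently reduced, and the limit
// covers the transfer buffer, i.e. blob data plus a 2-byte length prefix
// per segment.
inline unsigned clampInlineBlobSize(unsigned requested)
{
	const unsigned MAX_USHORT = 65535;
	return requested > MAX_USHORT ? MAX_USHORT : requested;   // silent reduction
}

inline unsigned inlineBufferBytes(unsigned dataBytes, unsigned segmentCount)
{
	// e.g. a single-segment blob can carry at most 65535 - 2 = 65533 data bytes
	return dataBytes + 2 * segmentCount;
}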

@hvlad (Member, Author) commented Jan 31, 2025

Note: currently the per-blob buffer is pre-allocated and its size is 16KB. This means that smaller blobs require no additional memory re-allocations but occupy 16KB in memory (and in the blob cache) regardless of their real size. I am considering changes in this regard.

Done.

@hvlad (Member, Author) commented Jan 31, 2025

The updated builds for testing can be found here:
https://github.com/FirebirdSQL/firebird/actions/runs/13076211167
Scroll down the page to the 'Artifacts' section.

I consider this PR feature-complete now.

@hvlad hvlad marked this pull request as ready for review January 31, 2025 16:59
@hvlad hvlad merged commit 8343c72 into master Feb 5, 2025
48 checks passed