Greenplum Error: "ERROR: Invalid Memory Alloc Request Size 1073742081 (context 'xxxx') (mcxt.c:1257) (mcxt.c:477)"

Environment

Product: Pivotal Greenplum
Version: 4.3.x

Symptom

When inserting or selecting data where the tuple/field size is greater than 1 GB, Greenplum returns an error message similar to the following:

'ERROR: invalid memory alloc request size 1073742081 (context 'SOLR CURL CONTEXT') (mcxt.c:1257) (mcxt.c:477) (seg28 slice5 sdw5:40004 pid=10636) (cdbdisp.c:1326); Error while executing the query' 

Cause

What is an invalid memory alloc request size error?

The maximum allowed size for a tuple/field in Greenplum is 1 GB. When the data in a row exceeds 1 GB, this error is returned.

Note: This error does not indicate that the operating system is running out of memory (RAM/swap); it indicates that Greenplum does not allow more than 1 GB to be allocated for a single tuple.
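To see how close an existing text field is to this limit, a query along the following lines can be used (a minimal sketch; the table virtu and column doc match the examples below and are illustrative, so adjust them to the actual schema):

-- Report each field's size and how close it is to the 1 GB per-field limit.
select pg_size_pretty(octet_length(doc)) as doc_size,
       round(100.0 * octet_length(doc) / (1024*1024*1024), 2) as pct_of_limit
from virtu;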

When does this issue occur?

The example below details a scenario in which this issue can occur.

In this example, 'GPDBGPDBGP' is a 10-character string (1 character = 1 byte). First, an aggregated string of this value repeated 10 times (100 bytes) is created, and the string_to_array function succeeds on it; scaling the string up, as shown further below, eventually triggers the error:

gpadmin=# create table virtu as select string_agg('GPDBGPDBGP') doc from generate_series(1,10);
SELECT 1

gpadmin=# select * from virtu;
doc
------------------------------------------------------------------------------------------------------
GPDBGPDBGPGPDBGPDBGPGPDBGPDBGPGPDBGPDBGPGPDBGPDBGPGPDBGPDBGPGPDBGPDBGPGPDBGPDBGPGPDBGPDBGPGPDBGPDBGP
(1 row)

gpadmin=# select pg_size_pretty(length(doc)) from virtu;
pg_size_pretty
----------------
100 bytes
(1 row)

gpadmin=# select string_to_array(doc,'|') from virtu;
string_to_array
--------------------------------------------------------------------------------------------------------
{GPDBGPDBGPGPDBGPDBGPGPDBGPDBGPGPDBGPDBGPGPDBGPDBGPGPDBGPDBGPGPDBGPDBGPGPDBGPDBGPGPDBGPDBGPGPDBGPDBGP}
(1 row)

Scaling to a greater string size

Creating a 250 MB string:

gpadmin=# create table virtu as select string_agg('GPDBGPDBGP') doc from generate_series(1,1024*1024*25);
SELECT 1
gpadmin=# select pg_size_pretty(length(doc)) from virtu;
pg_size_pretty 
----------------
250 MB
(1 row)

Executing string_to_array function on a 250 MB string

psql -c "select string_to_array(doc,'|') from virtu" > string_to_array_250MB.out

[gpadmin@mdw ~]$ ls -lrth | tail -1
-rw------- 1 gpadmin gpadmin 751M Feb 1 19:43 string_to_array_250MB.out

The output file is 751 MB, approximately three times the size of the 250 MB string. To verify that it actually wrote the string, use:

grep GP string_to_array_250MB.out | less

What is the limit?

The maximum allowed size is roughly 250 MB before the delimiter, because Greenplum uses a wide-character data type that consumes four times the original size, and the combined row size must remain below 1 GB.
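The arithmetic behind this limit can be sketched directly (the factor of four reflects the wide-character expansion described above):

gpadmin=# select pg_size_pretty(250::bigint*1024*1024*4) as mb250_x4, pg_size_pretty(260::bigint*1024*1024*4) as mb260_x4;
 mb250_x4 | mb260_x4
----------+----------
 1000 MB  | 1040 MB
(1 row)

A 250 MB string expands to 1000 MB, which stays under the 1 GB limit, while a 260 MB string expands to 1040 MB and exceeds it, as the next example demonstrates.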

gpadmin=# drop table virtu;
DROP TABLE 
gpadmin=# create table virtu as select string_agg('GPDBGPDBGP') doc from generate_series(1,1024*1024*26); 
SELECT 1 
gpadmin=# select pg_size_pretty(length(doc)) from virtu; 
pg_size_pretty
---------------- 
260 MB
(1 row) 

gpadmin=# select string_to_array(doc,'P') from virtu;
ERROR: invalid memory alloc request size 1073741824 (context 'accumArrayResult') (mcxt.c:1257) (mcxt.c:477) (seg13 slice1 sdw3:60001 pid=25585) (cdbdisp.c:1326)

In the master logs, the following error should be present (if the stack trace differs from the one shown here, or string_to_array is not being used, please contact Pivotal Support):

"ERROR","XX000","invalid memory alloc request size 1073741824 (context 'accumArrayResult') (mcxt.c:1257) (mcxt.c:477)  (seg13 slice1 sdw3:60001 pid=19283) (cdbdisp.c:1326)",,,,,,"select string_to_array(doc,'P') from virtu;",0,,"cdbdisp.c",1326,"Stack trace:
1    0xb0768e postgres errstart + 0x4de
2    0xbe8595 postgres cdbdisp_finishCommand + 0x135
3    0xbe8768 postgres cdbdisp_handleError + 0x128
4    0x76a34c postgres mppExecutorCleanup + 0x4c
5    0x74e9ea postgres ExecutorRun + 0x21a
6    0x99fc21 postgres PortalRun + 0x6f1
7    0x995b3c postgres <symbol not found> + 0x995b3c
8    0x999a76 postgres PostgresMain + 0x28f6
9    0x8f762e postgres <symbol not found> + 0x8f762e
10   0x8fa3b0 postgres PostmasterMain + 0xff0
11   0x7fc44f postgres main + 0x44f
12   0x3244e1d9f4 libc.so.6 __libc_start_main + 0xf4
13   0x4c4619 postgres <symbol not found> + 0x4c4619
"

In the example above, note that the delimiter has been changed to the character 'P'. This query returns delimited strings that are each less than 250 MB, but their collective size is greater than 1 GB. This leads to the "invalid memory alloc request size" error.

If there are multiple rows in the table, the query below can be used to find the size of individual rows and to check whether one particular row exceeds the limit:

select pg_size_pretty(length(doc)) from virtu;
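When the table is large, the same query can be extended to rank rows by size and surface the offending ones (a sketch; adjust the table and column names, and include a key column if one exists, to identify the rows):

select pg_size_pretty(length(doc)) as doc_size
from virtu
order by length(doc) desc
limit 5;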

Note: As highlighted in the error message above, the memory context is "accumArrayResult". The context differs between scenarios; for example, below is another frequently seen form of the error, raised when handling a row larger than 1 GB while using GPText.

'ERROR: invalid memory alloc request size 1073742081 (context 'SOLR CURL CONTEXT') (mcxt.c:1257) (mcxt.c:477) (seg28 slice5 sdw5:40004 pid=10636) (cdbdisp.c:1326); Error while executing the query' 

The reason behind this error is the same as discussed in this article. The only difference is the context, which is logged for informational purposes and indicates which part of the code base raised the error.

Resolution

Limit the size of the data in a row to less than 1 GB; rows larger than 1 GB are not permitted in Greenplum.
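One way to stay within the limit is to split oversized text values into smaller pieces before loading or processing them, so that no single field approaches 1 GB. The query below is a minimal sketch only; the 100 MB chunk size, the cap of 16 chunks, and the table and column names (virtu, doc) are illustrative and should be adapted to the actual schema:

-- Split each document into chunks of at most 100 MB (104857600 bytes).
-- The cross join with generate_series caps the number of chunks at 16.
select g.chunk_no,
       substring(doc from (g.chunk_no - 1) * 104857600 + 1 for 104857600) as doc_chunk
from virtu, generate_series(1, 16) as g(chunk_no)
where (g.chunk_no - 1) * 104857600 < length(doc);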
