

A foreign table may be created for a single Parquet file or for a set of files. It is also possible to specify a user defined function that returns a list of file paths. Depending on the number of files and the table options, parquet_s3_fdw may use one of several execution strategies.

The following foreign table options are supported:

- `dirname` - path to a directory containing the Parquet files to read. A path on AWS S3 can be specified by starting it with `s3://`; mixing local paths and S3 paths is not supported (see the example after this list).
- `sorted` - space separated list of columns that the Parquet files are presorted by. This helps PostgreSQL avoid redundant sorting when running a query with an `ORDER BY` clause, or in other cases where a presorted set is beneficial (Group Aggregate, Merge Join).
- `files_in_order` - specifies that the files given by `filename` or returned by `files_func` are ordered according to the `sorted` option and have no rangewise intersection; this allows a Gather Merge node on top of a parallel Multifile scan (default `false`).
- `use_mmap` - whether memory map operations will be used instead of file read operations (default `false`).
- `use_threads` - enables Apache Arrow's parallel column decoding/decompression (default `false`).
- `files_func` - user defined function used by parquet_s3_fdw to retrieve the list of Parquet files on each query. The function must take one JSONB argument and return a text array of full paths to Parquet files (a sketch of such a function follows below).
- `files_func_arg` - argument for the function specified by `files_func`.
- `max_open_files` - the limit for the number of Parquet files open simultaneously.
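As a minimal sketch of how these options combine, assuming a foreign server for this wrapper named `parquet_s3_srv` has already been created; the table name, column names, and bucket path are made up for illustration:

```sql
-- Hypothetical foreign table reading every Parquet file under an S3 prefix,
-- telling the planner the files are presorted by ts.
CREATE FOREIGN TABLE events (
    ts      timestamp,
    user_id bigint,
    payload text
)
SERVER parquet_s3_srv
OPTIONS (
    dirname     's3://my-bucket/events/',  -- S3 path: starts with s3://
    sorted      'ts',                      -- columns the files are presorted by
    use_threads 'true'                     -- parallel Arrow decoding/decompression
);
```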

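The sketch below illustrates a file-listing function for `files_func`. The function name, the JSONB key it reads, and the file names are assumptions for the example; only the required signature (one JSONB argument, `text[]` result) comes from the option description above, and how the value passed via `files_func_arg` is interpreted is entirely up to the function.

```sql
-- Hypothetical files_func implementation: builds full paths from a
-- prefix passed in through files_func_arg.
CREATE FUNCTION list_parquet_files(args jsonb)
RETURNS text[] AS
$$
    SELECT array_agg((args->>'dir') || '/' || f)
    FROM unnest(ARRAY['part-0001.parquet', 'part-0002.parquet']) AS f;
$$ LANGUAGE sql;

-- Foreign table whose file list is computed by the function on each query.
CREATE FOREIGN TABLE events_by_func (
    ts      timestamp,
    user_id bigint
)
SERVER parquet_s3_srv
OPTIONS (
    files_func     'list_parquet_files',
    files_func_arg '{"dir": "s3://my-bucket/events"}'
);
```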