CSV Import and Export

Articles can read and write CSV (comma-separated values) files using FD MemTable. CSV files are plain text files where each row is a record and each field is separated by a delimiter character — usually a comma. They are widely supported by spreadsheet applications like Microsoft Excel and Google Sheets.

Importing a CSV File into FD MemTable

There are two ways to load a CSV file into an FD MemTable — automatically at report start using the DataFileName property, or manually from script using LoadFromCSV.

Automatic Loading at Report Start

The simplest approach is to set the DataFileName and DataFileFormat properties on the FD MemTable component. Articles loads the file automatically when the report starts — no script required.

  1. Select your FD MemTable in the designer
  2. Set DataFileName to the full path of your CSV file, for example C:\Data\customers.csv
  3. Set DataFileFormat to mffCSV
  4. Set CSVHasHeader to True if the first row of your file contains field names (this is the default)
  5. Leave CSVDelimiter as a comma and CSVQuoteChar as a double-quote unless your file uses different characters

When the report runs, Articles loads the CSV file and makes the data available through the FD MemTable like any other dataset: you can bind report bands to it and read values from script exactly as you would with any other FD MemTable.
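As a sketch of reading the auto-loaded data from script, using the same event pattern as the examples below (the CustomerName column is a hypothetical example from the file's header row):

procedure ReportOnStartReport(Sender: TObject);
var
  Name: string;
begin
  // The CSV was already loaded automatically via DataFileName/DataFileFormat,
  // so the records are available as soon as the report starts.
  FDMemTable1.First;
  while not FDMemTable1.Eof do
  begin
    // 'CustomerName' is a hypothetical column name from the CSV header row
    Name := FDMemTable1.FieldByName('CustomerName').AsString;
    FDMemTable1.Next;
  end;
end;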

CSV Parsing Properties

Three properties on FD MemTable control how the CSV file is parsed:

CSVDelimiter
The character that separates fields in each row. Default is a comma. Change this to a semicolon for European-format CSV files, or to a tab character for tab-separated files (.tsv or .txt).

CSVQuoteChar
The character used to wrap fields that contain the delimiter or a line break. Default is a double-quote. A field wrapped in quote characters is treated as a single value even if it contains commas or line breaks inside it. Articles handles doubled quote characters inside quoted fields automatically — for example "He said ""hello""" is read correctly as He said "hello".

CSVHasHeader
When True (default), the first row of the file is treated as field names and is not included in the data. When False, Articles generates field names automatically as F1, F2, F3 and so on, and the first row is treated as data.
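A minimal sketch tying the three properties together, assuming a European-format file with semicolon separators and no header row (the file path is hypothetical):

procedure ReportOnStartReport(Sender: TObject);
begin
  FDMemTable1.CSVDelimiter := ';';   // semicolons between fields
  FDMemTable1.CSVQuoteChar := '"';   // default quote character
  FDMemTable1.CSVHasHeader := False; // first row is data, not field names
  FDMemTable1.LoadFromCSV('C:\Data\orders.csv');
  // With CSVHasHeader = False, fields are named F1, F2, F3, ...
  // so the first column is read as:
  //   FDMemTable1.FieldByName('F1').AsString
end;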

Loading a CSV File from Script

To load a CSV file from script, call LoadFromCSV on the FD MemTable. This gives you full control over when the file is loaded and lets you change the file path at runtime:

procedure ReportOnStartReport(Sender: TObject);
begin
  FDMemTable1.CSVDelimiter  := ',';
  FDMemTable1.CSVQuoteChar  := '"';
  FDMemTable1.CSVHasHeader  := True;
  FDMemTable1.LoadFromCSV('C:\Data\customers.csv');
end;

If you have pre-defined FieldDefs on the FD MemTable, Articles creates the dataset structure first and then maps the CSV columns into the correct field types and sizes. If FieldDefs is empty, Articles infers the field names, types, and sizes automatically from the CSV header row and data.

Pre-Defining Fields for CSV Import

When Articles infers field types automatically from a CSV file it reads a sample of rows to make its best guess. For most files this works well, but if you need exact control over field types and sizes — for example to ensure an invoice number is treated as a string rather than an integer — define the FieldDefs before loading:

procedure ReportOnStartReport(Sender: TObject);
begin
  FDMemTable1.FieldDefs.Clear;
  FDMemTable1.FieldDefs.Add('InvoiceNumber', ftString,   20,  False);
  FDMemTable1.FieldDefs.Add('CustomerName',  ftString,   100, False);
  FDMemTable1.FieldDefs.Add('InvoiceDate',   ftDate,     0,   False);
  FDMemTable1.FieldDefs.Add('Amount',        ftFloat,    0,   False);
  FDMemTable1.FieldDefs.Add('Status',        ftString,   20,  False);
  FDMemTable1.CreateDataSet;
  FDMemTable1.LoadFromCSV('C:\Data\invoices.csv');
end;

When FieldDefs are pre-defined, Articles creates the dataset structure first and BatchMove maps the CSV columns into those fields by name. The column names in the CSV header row must match the FieldDef names exactly.

Exporting Data to a CSV File

Any FD MemTable, FD Query, or FD Table can save its current data to a CSV file using SaveAsCSV. The dataset must be open before calling SaveAsCSV.

Saving from an FD MemTable:

FDMemTable1.SaveAsCSV('C:\Reports\Export\results.csv');

Saving from an FD Query:

FDQuery1.SaveAsCSV('C:\Reports\Export\customers.csv');

Saving from an FD Table:

FDTable1.SaveAsCSV('C:\Reports\Export\products.csv');

The exported file always includes a header row with the field names. Each row of data follows on its own line. Fields that contain the delimiter character or line breaks are automatically wrapped in double-quote characters.
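For example, an export can be triggered from script at the end of a report run. The event name below follows the pattern of the earlier examples and the file path is hypothetical:

procedure ReportOnEndReport(Sender: TObject);
begin
  // Write the memtable's current rows out as CSV. The header row with
  // field names is always included, and fields containing the delimiter
  // or line breaks are wrapped in double quotes automatically.
  FDMemTable1.SaveAsCSV('C:\Reports\Export\results.csv');
end;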

Important — Reader and Writer Property Names

If you are working with CSV import and export directly in Delphi code using TFDBatchMove, be aware that the property names on the reader and writer are not symmetrical and do not follow obvious naming conventions. Getting these wrong is a common source of bugs:

On TFDBatchMoveTextReader (importing):

  • DataDef.Delimiter — the field separator character (the comma)
  • DataDef.Separator — the quote character (the double-quote)
  • DataDef.WithFieldNames — whether the first row is a header

On TFDBatchMoveTextWriter (exporting):

  • DataDef.Separator — the field separator character (the comma)
  • DataDef.Delimiter — the quote character (the double-quote)
  • DataDef.WithFieldNames — whether to write a header row

Note that Delimiter and Separator swap meaning between the reader and the writer. This is a FireDAC quirk — Delimiter means the field separator on the reader but the quote character on the writer. If you use Articles’ built-in SaveAsCSV and LoadFromCSV methods you do not need to worry about this — Articles handles it correctly for you. This note is only relevant if you are writing your own TFDBatchMove code.
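As a hedged Delphi-side sketch (ordinary FireDAC code, not report script) illustrating the property names above — the helper procedure is an assumption, and the classes are FireDAC's standard TFDBatchMove, TFDBatchMoveDataSetReader, and TFDBatchMoveTextWriter:

procedure ExportWithBatchMove(ADataSet: TDataSet; const AFileName: string);
var
  BatchMove: TFDBatchMove;
  Reader: TFDBatchMoveDataSetReader;
  Writer: TFDBatchMoveTextWriter;
begin
  BatchMove := TFDBatchMove.Create(nil);
  try
    // Reader and Writer are owned by BatchMove and freed with it
    Reader := TFDBatchMoveDataSetReader.Create(BatchMove);
    Writer := TFDBatchMoveTextWriter.Create(BatchMove);
    Reader.DataSet := ADataSet;
    Writer.FileName := AFileName;
    // On the WRITER, Separator is the field separator and Delimiter is the
    // quote character -- the reverse of the reader, as described above.
    Writer.DataDef.Separator := ',';
    Writer.DataDef.Delimiter := '"';
    Writer.DataDef.WithFieldNames := True;
    BatchMove.Reader := Reader;
    BatchMove.Writer := Writer;
    BatchMove.Execute;
  finally
    BatchMove.Free;
  end;
end;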

Tab-Separated Files

Tab-separated files (.tsv or .txt) work the same way as CSV files. The only difference is the delimiter character. To import a tab-separated file, set CSVDelimiter to a tab character in script before calling LoadFromCSV:

FDMemTable1.CSVDelimiter := #9;
FDMemTable1.CSVHasHeader := True;
FDMemTable1.LoadFromCSV('C:\Data\export.txt');

Common Problems

Fields are being merged into one column
The CSV file uses a different delimiter than the one CSVDelimiter is set to. For example, the file uses semicolons but CSVDelimiter is set to a comma. Check the file in a text editor and set CSVDelimiter to match.

The first row of data is being treated as field names
CSVHasHeader is True but your file does not have a header row. Set CSVHasHeader to False so Articles treats the first row as data and generates field names automatically.

Field names are F1, F2, F3 instead of the actual column names
CSVHasHeader is False but your file has a header row. Set CSVHasHeader to True so Articles reads the first row as field names.

A numeric field is being imported as a string
Articles inferred the field type from the CSV data and guessed wrong. Pre-define your FieldDefs with the correct types before calling LoadFromCSV. See Pre-Defining Fields for CSV Import above.

Fields containing commas are being split incorrectly
The field value contains a comma but is not wrapped in quote characters in the CSV file. This is a problem with the CSV file itself — fields containing the delimiter must be wrapped in the quote character. If you control the file source, fix the export to wrap fields correctly. If not, consider using a different delimiter that does not appear in your data.

SaveAsCSV produces an error saying the dataset is not open
The FD MemTable, FD Query, or FD Table must be open before calling SaveAsCSV. Check that Active is True before calling the method.
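A minimal guard in script avoids the error (the export path is hypothetical):

// SaveAsCSV requires an open dataset, so check Active before exporting
if FDMemTable1.Active then
  FDMemTable1.SaveAsCSV('C:\Reports\Export\results.csv');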

Related Pages