-type

Possible values: xml, text, ods, xls, xlsx

Defines the type of the input file. This is only needed if the input file has a non-standard file extension. If this parameter is not specified, the import type is derived from the input file's extension.

This parameter supports code-completion. If invoked, it will only display available import types.

-mode

Defines how the data should be sent to the database. Possible values are 'insert', 'update', 'insert,update' and 'update,insert'. For details please refer to the update mode explanation.

For some DBMS, the additional modes 'upsert' and 'insertIgnore' are supported. For details please refer to the native upsert and native insertIgnore explanations.

This parameter supports code-completion.
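
For example, a minimal sketch of an import that inserts new rows and updates existing ones (file, table, and key column names are illustrative):

-- insert new rows, update existing ones matched on the key column
WbImport -file=/data/person.txt
         -table=person
         -mode=insert,update
         -keyColumns=person_id;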

-file

Defines the full name of the input file. Alternatively you can also specify a directory (using -sourceDir) from which all files are imported.

-table

Defines the table into which the data should be imported.

This parameter is ignored if the files are imported using the -sourceDir parameter.

This parameter supports code-completion.

-sourceDir

Defines a directory which contains import files. All files from that directory will be imported. If this switch is used with text files and no target table is specified, then it is assumed that each filename (without the extension) defines the target table. If a target table is specified using the -table parameter, then all files will be imported into the same table. The -deleteTarget parameter will be ignored if multiple files are imported into a single table.
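
A sketch of a directory import where each file name determines its target table (the directory name is illustrative):

-- import every *.txt file; each file name determines the target table
WbImport -sourceDir=/data/import
         -extension=txt
         -type=text;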

-extension

When using the -sourceDir switch, the extension for the files to be imported can be defined. All files ending with the supplied value will be processed (e.g. -extension=csv). The extension given is case-sensitive (i.e. TXT is not the same as txt).

-ignoreOwner

If the names of the files imported from the directory specified with -sourceDir contain owner (schema) information, this owner (schema) information can be ignored using this parameter. Otherwise the files might be imported into the wrong schema, or the target tables will not be found.

-excludeFiles

Using -excludeFiles, files from the source directory (when using -sourceDir) can be excluded from the import. The value for this parameter is a comma-separated list of partial names. Each file that contains at least one of the values supplied in this parameter is ignored. -excludeFiles=back,data will exclude any file that contains the value back or data in its name, e.g.: backup, to_back, log_data_store etc.

-skipTargetCheck

Normally WbImport will check if the specified target table exists and will retrieve the column definition for the target table through the JDBC API. However, some JDBC drivers do not always return all table information, e.g. for temporary tables. If you know that the target table exists, the parameter -skipTargetCheck=true can be used to tell WbImport not to verify the existence of the table through the JDBC API.

However, WbImport still needs to know the data types of the table columns. When -skipTargetCheck is enabled, the data types are detected by running a SELECT * statement to fetch the definition of the columns. For JDBC drivers supporting metadata for prepared statements, the query is only prepared, but not executed. For drivers not supporting that, the query is executed and the result is limited to a single row.

-checkDependencies

When importing more than one file (using the -sourceDir switch) into tables with foreign key constraints, this switch can be used to import the files in the correct order (child tables first). When -checkDependencies=true is passed, SQL Workbench/J will check the foreign key dependencies for all tables. Note that this will not check dependencies in the data. This means that e.g. the data for a self-referencing table (parent/child) will not be ordered so that it can be imported. To import self-referencing tables, the foreign key constraint should be set to "initially deferred" in order to postpone evaluation of the constraint until commit time.
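
For example, combined with a directory import (the directory name is illustrative):

-- import a whole directory, child tables after their parents
WbImport -sourceDir=/data/import
         -extension=txt
         -checkDependencies=true;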

-commitEvery

If your DBMS needs frequent commits to improve performance and reduce locking on the import table, you can control the number of rows after which a COMMIT is sent to the server.

-commitEvery is a numeric value that defines the number of rows after which a COMMIT is sent to the DBMS. If this parameter is not passed (or has a value of zero or lower), then the import is run as a single transaction that is committed at the end.

When using batch import with a DBMS that requires frequent commits to improve import performance, the -commitBatch option should be used instead.

You can turn off the use of a commit or rollback during import completely by using the option -transactionControl=false.

Using -commitEvery means that in case of an error the already imported rows cannot be rolled back, leaving the data in a potentially invalid state.
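
For example, to commit after every 1000 imported rows (file and table names are illustrative):

-- send a COMMIT after every 1000 rows
WbImport -file=/data/orders.txt
         -table=orders
         -commitEvery=1000;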

-transactionControl

Possible values: true, false

Controls if SQL Workbench/J handles the transaction for the import, or if the import must be committed (or rolled back) manually. If -transactionControl=false is specified, SQL Workbench/J will neither send a COMMIT nor a ROLLBACK at the end. This can be used when multiple files need to be imported in a single transaction. This can be combined with the cleanup and error scripts in batch mode.

-continueOnError

Possible values: true, false

This parameter controls the behavior when errors occur during the import. The default is true, meaning that the import will continue even if an error occurs during file parsing or updating the database. Set this parameter to false if you want to stop the import as soon as an error occurs.

The default value for this parameter can be controlled in the settings file and it will be displayed if you run WbImport without any parameters.

With PostgreSQL, -continueOnError will only work if the use of savepoints is enabled using -useSavepoint=true.

-emptyFile

Possible values: ignore, warning, fail

This parameter controls the behavior when an empty file (i.e. with a length of zero bytes) is used as the input file. ignore means the file is ignored and no warning will be shown or written to the log file. warning means the file is ignored, but a warning will be shown and logged. With fail, an empty file will be treated as an error unless -continueOnError=true is specified.

The default value is fail.

-useSavepoint

Possible values: true, false

Controls if SQL Workbench/J guards every insert or update statement with a savepoint, to recover from individual errors during the import when continueOnError is set to true.

Using a savepoint for each DML statement can drastically reduce the performance of the import.

-keyColumns

Defines the key columns for the target table. This parameter is only necessary if the import is running in UPDATE mode.

It is assumed that the values for the key columns will never be NULL.

This parameter is ignored if files are imported using the -sourceDir parameter.
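
A sketch of an update-only import keyed on two columns (all names are illustrative):

-- update-only import matched on two key columns
WbImport -file=/data/person.txt
         -table=person
         -mode=update
         -keyColumns=last_name,first_name;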

-ignoreIdentityColumns

Possible values: true, false

Controls if identity or auto-increment columns will be included in the import.

If this is used, the JDBC driver must correctly report the column to be excluded as an AUTOINCREMENT column. This can be verified in the table definition display of the DbExplorer. If the column is reported with YES for the AUTOINCREMENT property, then this column will be excluded during the import.

-overrideIdentity

Possible values: system, user

Controls the overriding of identity columns for DBMS that support the SQL standard's OVERRIDING clause. If the option is specified, values from the import file are sent to the database, effectively disabling the automatic generation of identity values.

The option system will use OVERRIDING SYSTEM VALUE in the generated INSERT statement. The option user will use OVERRIDING USER VALUE.

No check is done whether the target DBMS actually supports this option.

The default is to not use any overriding option.
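
As an illustration, with -overrideIdentity=system the generated INSERT takes roughly this shape (table and column names are illustrative):

-- generated statement with -overrideIdentity=system
INSERT INTO person (id, name)
OVERRIDING SYSTEM VALUE
VALUES (?, ?);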

-schema

Defines the schema into which the data should be imported. This is necessary for DBMS that support schemas, when you want to import the data into a different schema than the current one.

-encoding

Defines the encoding of the input file (and possible CLOB files).

If code-completion is invoked for this parameter, it will show a list of encodings defined through the configuration property workbench.export.defaultencodings. This is a comma-separated list that can be changed using WbSetConfig. If this property is not defined, all available encodings will be shown.

-deleteTarget

Possible values: true, false

If this parameter is set to true, data from the target table will be deleted (using DELETE FROM ...) before the import is started. This parameter will only be used if -mode=insert is specified.

This parameter is ignored for spreadsheet imports.

-truncateTable

Possible values: true, false

This is essentially the same as -deleteTarget, but will use the command TRUNCATE to delete the contents of the table. For those DBMS that support this command, deleting rows is usually faster compared to the DELETE command, but it cannot be rolled back. This parameter will only be used if -mode=insert is specified.

-batchSize

A numeric value that defines the size of the batch queue. Any value greater than 1 will enable batch mode. If the JDBC driver supports this, the INSERT (or UPDATE) performance can be increased drastically.

This parameter will be ignored if the driver does not support batch updates or if the mode is not UPDATE or INSERT (i.e. if -mode=update,insert or -mode=insert,update is used).

-commitBatch

Possible values: true, false

If batch execution is used (by specifying a batch size using the -batchSize parameter), each batch will be committed when this parameter is set to true. This is slightly different from using -commitEvery with the value of the -batchSize parameter. The latter will add a COMMIT statement to the batch queue, rather than calling the JDBC commit() method. Some drivers do not allow mixing different statements in a batch queue. So, if a frequent COMMIT is needed, this parameter should be used.

When you specify -commitBatch, the parameter -commitEvery will be ignored. If no batch size is given (using -batchSize), then -commitBatch will also be ignored.
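
A sketch enabling batched inserts where each batch is committed (file and table names are illustrative):

-- batched inserts, committed after each batch of 500 rows
WbImport -file=/data/orders.txt
         -table=orders
         -batchSize=500
         -commitBatch=true;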

-updateWhere

When using update mode, an additional WHERE clause can be specified to limit the rows that are updated. The value of the -updateWhere parameter will be added to the generated UPDATE statement. If the value starts with the keyword AND or OR, the value will be added without further changes; otherwise the value will be added as an AND clause enclosed in parentheses. This parameter will be ignored if update mode is not active.
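
As an illustration, with -updateWhere="approved = false" the generated UPDATE would take roughly this shape (table and column names are illustrative):

-- generated statement with -updateWhere="approved = false"
UPDATE person
   SET first_name = ?, last_name = ?
 WHERE person_id = ?
   AND (approved = false);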

-startRow

A numeric value to define the first row to be imported. Any row before the specified row will be ignored. The header row is not counted to determine the row number. For a text file with a header row, the physical line 2 is row 1 (one) for this parameter.

When importing text files, empty lines in the input file are silently ignored and do not add to the count of rows for this parameter. So if your input file has two lines to be ignored, then one empty line and then another line to be ignored, startRow must be set to 4.

-endRow

A numeric value to define the last row to be imported. The import will be stopped after this row has been imported. When you specify -startRow=10 and -endRow=20, 11 rows will be imported (i.e. rows 10 to 20). If this is a text file import with a header row, this corresponds to the physical lines 11 to 21 in the input file, as the header row is not counted.
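
For example, to skip two unwanted lines below the header and then import exactly 50 rows (file and table names are illustrative):

-- skip two preamble lines below the header, then import rows 3 to 52
WbImport -file=/data/person.txt
         -table=person
         -startRow=3
         -endRow=52;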

-columnFilter

This defines a filter on column level that selects only certain rows from the input file to be sent to the database. The filter has to be defined as column1="regex",column2="regex". Only rows matching all of the supplied regular expressions will be included by the import.

This parameter is ignored when the -sourceDir parameter is used.

-badFile

If -continueOnError=true is used, you can specify a file to which rejected rows are written. If the provided filename denotes a directory, a file with the name of the import table will be created in that directory. When doing multi-table inserts you have to specify a directory name.

If a file with that name exists, it will be deleted when the import for the table is started. The file will not be created unless at least one record is rejected during the import. The file will be created with the same encoding as indicated for the input file(s).

-maxLength

With the parameter -maxLength you can truncate data for character columns (VARCHAR, CHAR) during the import. This can be used to import data into columns that are not big enough (e.g. VARCHAR columns) to hold all values from the input file and to ensure the import can finish without errors.

The parameter defines the maximum length for certain columns using the following format: -maxLength='firstname=30,lastname=20' where firstname and lastname are columns from the target table. The above example will limit the values for the column firstname to 30 characters and the values for the column lastname to 20 characters. If a non-character column is specified, it is ignored. Note that you have to quote the parameter's value in order to be able to use the "embedded" equals sign.

-booleanToNumber

Possible values: true, false

In case you are importing a boolean column (containing "true", "false") into a numeric column in the target DBMS, SQL Workbench/J will automatically convert the literal true to the numeric value 1 (one) and the literal false to the numeric value 0 (zero). If you do not want this automatic conversion, you have to specify -booleanToNumber=false for the import. The default true/false literals can be overridden with the -literalsFalse and -literalsTrue switches.

To store different values than 0/1 in the target column, use the parameters -numericFalse and -numericTrue.

-numericFalse -numericTrue

These parameters control the conversion of boolean literals into numbers.

If these parameters are used, any text input that is identified as a "false" literal will be stored with the number specified with -numericFalse. Any text input that is identified as "true" will be stored as the number specified with -numericTrue.

To use -1 for false and 1 for true, use the following parameters: -numericFalse='-1' -numericTrue='1'. Note that '-1' must be quoted due to the dash. If these parameters are used, -booleanToNumber=true will be assumed implicitly.

These parameters can be combined with -literalsFalse and -literalsTrue.

Please note:

  • This conversion is only applied for "text" input values. Valid numbers in the input file will not be converted to the values specified with -numericFalse or -numericTrue. This means that you cannot change a 0 (zero) in the input file into a -1 in the target column.

-literalsFalse -literalsTrue

These parameters control the conversion of boolean literals into boolean values.

These two switches define the text values that represent the (boolean) values false and true in the input file. This conversion is applied when storing the data in a column that is of type boolean in the database.

The value for these switches is a comma-separated list of literals that should be treated as the specified value, e.g.: -literalsFalse='false,0' -literalsTrue='true,1' will define the most commonly used values for true/false.

Please note:

  • The definition of the literals is case-sensitive!
  • You always have to specify both switches, otherwise the definition will be ignored.

-dateFormat

The format for date columns.

This is only used for text imports and when using -stringDates=true with spreadsheet imports.

-timestampFormat

The format for datetime (or timestamp) columns in the input file.

This is only used for text imports and when using -stringDates=true with spreadsheet imports.
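
A minimal sketch assuming ISO-style input values; the patterns follow Java's date/time format syntax (file and table names are illustrative):

-- parse dates and timestamps using Java-style format patterns
WbImport -file=/data/person.txt
         -table=person
         -dateFormat='yyyy-MM-dd'
         -timestampFormat='yyyy-MM-dd HH:mm:ss';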

-locale

The locale (language) to be used for parsing date and timestamp values.

This is only used for text imports and spreadsheet imports using -stringDates=true.

This parameter supports code-completion.

-illegalDateIsNull

If this is set to true, illegal dates (such as February 31st) or malformed dates inside the input file will be treated as a null value.

This is only used for text and spreadsheet imports.

-trimValues

Possible values: true, false

Controls whether leading and trailing whitespace is removed from the input values before they are stored in the database. When used in combination with -emptyStringIsNull=true, this means that a column value that contains only whitespace will be stored as NULL in the database.

The default value for this parameter can be controlled in the settings file and it will be displayed if you run WbImport without any parameters.

Note that input values for non-character columns (such as numbers or date columns) are always trimmed before converting them to their target datatype.

This is only used for text and spreadsheet imports.

-emptyStringIsNull

Possible values: true, false

Controls whether input values for character type columns with a length of zero are treated as NULL (value true) or as an empty string.

The default value for this parameter is true.

Note that input values for non-character columns (such as numbers or date columns) that are empty or consist only of whitespace will always be treated as NULL.

This is only used for text and spreadsheet imports.

-columnExpression

This parameter can be used to define SQL expressions to be used instead of a plain column reference in the INSERT statement. This is useful for applying SQL functions on the server directly to the value retrieved from the input file.

The format is -columnExpression=column_name:expression. The parameter can be repeated multiple times for multiple columns.

The position of the input value is denoted by a question mark. The question mark must not be quoted, even if the input is a string/character value.

As an example, this can be used to convert a comma separated string into a Postgres array: -columnExpression=tags:string_to_array(?, ','). Or it can be used to round a numeric value: -columnExpression=price:round(?, 3).

-constantValues

With this parameter you can supply constant values for one or more columns that will be used when inserting new rows into the database.

The constant values will only be used when inserting rows (e.g. using -mode=insert).

The format of the values is -constantValues="column1=value1,column2=value2". The parameter can be repeated multiple times to make quoting easier: -constantValues="column1=value1" -constantValues="column2=value2"

The values will be converted by the same rules as the input values from the input file. If the value for a character column is enclosed in single quotes, these will be removed from the value before sending it to the database. To include single quotes at the start or end of the input value you need to use two single quotes, e.g. -constantValues="name=''Quoted'',title='with space'". For the field name the value 'Quoted' will be sent to the database; for the field title the value with space will be sent to the database.

To specify a function call to be executed, enclose the function call in ${...}, e.g. ${mysequence.nextval} or ${myfunc()}. The supplied function will be put into the VALUES part of the INSERT statement without further checking (after removing the ${ and } characters, of course). So make sure that the syntax is valid for your DBMS. If you do need to store a literal like ${some.value} in the database, you need to quote it: -constantValues="varname='${some.value}'".

You can also specify a SELECT statement that retrieves information from the database based on values from the input file. This is useful when the input file contains e.g. values from a lookup table (but not the primary key from the lookup table).

The syntax to specify a SELECT statement is similar to a function call: -constantValues="type_id=$@{SELECT type_id FROM type_definition WHERE type_name = $4}" where $4 references the fourth column from the input file. The first column is $1 (not $0).

The parameters for the SELECT statement do not need to be quoted, as internally a prepared statement is used. However, the values in the input file must be convertible by the JDBC driver. If the input column from the source file is not part of the target table, the value will be passed as a string to the statement. This means that the SQL query should cast the parameter to the appropriate data type if needed, e.g. where some_id = cast($5 as integer).

In addition to the function call or SELECT statements, WbImport provides four variables that can be used to access the name of the currently imported file and the current line number. This can be used to store the source of the data in the target table.

The following variables are supported:

  • _wb_import_file_path: contains the full path and file name of the current import file
  • _wb_import_file_name: contains only the file name (without the path)
  • _wb_import_file_dir: contains the name of the directory in which the file is stored
  • _wb_import_line_number: will be replaced with the current line number of the input file during the import

Please refer to the examples for more details on the usage.
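
For example, assuming a (hypothetical) source_file column in the target tables and assuming these variables are referenced with SQL Workbench/J's $[...] variable syntax, the file name could be stored like this:

-- assumption: the _wb_import_* variables use the $[...] variable syntax
WbImport -sourceDir=/data/import
         -extension=txt
         -constantValues="source_file=$[_wb_import_file_name]";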

-insertSQL

Define the statement to be used for inserting rows.

This can be used to customize the generated INSERT statement, e.g. to pass special hints to the database. The parameter may only contain the INSERT INTO part of the statement (i.e. INSERT INTO is the default if nothing is specified). For example, to specify an append hint for Oracle:

You have to quote the parameter value using single quotes, otherwise comments will be removed from the SQL statement!

-insertSQL='INSERT /*+ append */ INTO'

-adjustSequences

Possible values: true, false

For DBMS that support sequences which are associated with a column, this parameter can be used to adjust the next value for the sequence to the maximum value of the imported data. This can also be used to synchronize identity columns for DBMS that allow overriding the generated values.

Currently this is implemented for PostgreSQL, DB2 (LUW), H2 Database and HyperSQL (aka HSQLDB).

-preTableStatement -postTableStatement

This parameter defines a SQL statement that should be executed before the import process starts inserting data into the target table. The name of the current table (when e.g. importing a whole directory) can be referenced using ${table.name}.

To define a statement that should be executed after all rows have been inserted and have been committed, you can use the -postTableStatement parameter.

These parameters can e.g. be used to enable identity insert for MS SQL Server:

-preTableStatement="set identity_insert ${table.name} on"
-postTableStatement="set identity_insert ${table.name} off"

Errors resulting from executing these statements will be ignored. If you want to abort the import in that case you can specify -ignorePrePostErrors=false and -continueOnError=false.

These statements are only used if more than one table is processed.

-runTableStatementOnError

Possible values: true, false

Controls the execution of the post-table statement in case an error occurred while importing the data. By default the post-table statement is executed even if the import was not successful. If this should not happen, use -runTableStatementOnError=false.

-ignorePrePostErrors

Possible values: true, false

Controls handling of errors for the SQL statements defined through the -preTableStatement and -postTableStatement parameters. If this is set to true (the default), errors resulting from executing the supplied statements are ignored. If set to false, then error handling depends on the parameter -continueOnError.

-showProgress

Possible values: true, false, or a numeric row interval

Controls the update frequency of the progress display in the status bar (when running in GUI mode). By default every 10th row is reported. To disable the progress display, specify a value of 0 (zero) or the value false. true will set the progress interval to 1 (one).
