NAME
    Ora2Pg - Oracle to PostgreSQL database schema converter

DESCRIPTION
    Ora2Pg is a free tool used to migrate an Oracle database to a PostgreSQL
    compatible schema. It connects to your Oracle database, scans it
    automatically and extracts its structure or data, then generates SQL
    scripts that you can load into your PostgreSQL database.

    Ora2Pg can be used for anything from reverse engineering an Oracle
    database to a huge enterprise database migration, or simply replicating
    some Oracle data into a PostgreSQL database. It is really easy to use and
    doesn't require any Oracle database knowledge other than providing the
    parameters needed to connect to the Oracle database.

FEATURES
    Ora2Pg consists of a Perl script (ora2pg) and a Perl module (Ora2Pg.pm).
    The only thing you have to modify is the configuration file ora2pg.conf,
    by setting the DSN to the Oracle database and optionally the name of a
    schema. Once that's done you just have to set the type of export you
    want: TABLE with constraints, VIEW, MVIEW, TABLESPACE, SEQUENCE,
    INDEXES, TRIGGER, GRANT, FUNCTION, PROCEDURE, PACKAGE, PARTITION, TYPE,
    INSERT or COPY, FDW, QUERY, KETTLE, SYNONYM.

    By default Ora2Pg exports to a file that you can load into PostgreSQL
    with the psql client, but you can also import directly into a PostgreSQL
    database by setting its DSN into the configuration file. With all
    configuration options of ora2pg.conf you have full control of what
    should be exported and how.
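
    As a minimal illustration (the values below are placeholders, the
    directive names are the real ones documented later in this file), an
    ora2pg.conf can be as small as:

            # Oracle connection
            ORACLE_DSN      dbi:Oracle:host=oradb_host.myhost.com;sid=DB_SID;port=1521
            ORACLE_USER     system
            ORACLE_PWD      manager
            # Limit the export to one schema and pick the export type
            SCHEMA          APPS
            TYPE            TABLE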

    Features included:

            - Export full database schema (tables, views, sequences, indexes), with
              unique, primary, foreign key and check constraints.
            - Export grants/privileges for users and groups.
            - Export range/list partitions and sub partitions.
            - Export a table selection (by specifying the table names).
            - Export Oracle schema to a PostgreSQL 8.4+ schema.
            - Export predefined functions, triggers, procedures, packages and
              package bodies.
            - Export full data or following a WHERE clause.
            - Full support of Oracle BLOB object as PG BYTEA.
            - Export Oracle views as PG tables.
            - Export Oracle user defined types.
            - Provide some basic automatic conversion of PLSQL code to PLPGSQL.
            - Works on any platform.
            - Export Oracle tables as foreign data wrapper tables.
            - Export materialized view.
            - Show a report of an Oracle database content.
            - Migration cost assessment of an Oracle database.
            - Migration difficulty level assessment of an Oracle database.
            - Migration cost assessment of PL/SQL code from a file.
            - Migration cost assessment of Oracle SQL queries stored in a file.
            - Generate XML ktr files to be used with Pentaho Data Integration (Kettle)
            - Export Oracle locator and spatial geometries into PostGIS.
            - Export DBLINK as Oracle FDW.
            - Export SYNONYMS as views.
            - Export DIRECTORY as external table or directory for external_file extension.
            - Dispatch a list of SQL queries over multiple PostgreSQL connections.
            - Perform a diff between Oracle and PostgreSQL databases for test purposes.
            - MySQL/MariaDB and Microsoft SQL Server migration.

    Ora2Pg does its best to automatically convert your Oracle database to
    PostgreSQL but there is still manual work to do. The Oracle-specific
    PL/SQL code generated for functions, procedures, packages and triggers
    has to be reviewed to match the PostgreSQL syntax. You will find some
    useful recommendations on porting Oracle PL/SQL code to PostgreSQL
    PL/PGSQL at "Converting from other Databases to PostgreSQL", section:
    Oracle (http://wiki.postgresql.org/wiki/Main_Page).

    See http://ora2pg.darold.net/report.html for an HTML sample of an Oracle
    database migration report.

INSTALLATION
    All Perl modules can always be found at CPAN (http://search.cpan.org/).
    Just type the full name of the module (ex: DBD::Oracle) into the search
    input box and it will bring you to the download page.

    Releases of Ora2Pg are available at SF.net
    (https://sourceforge.net/projects/ora2pg/).

    Under Windows you should install Strawberry Perl
    (http://strawberryperl.com/) and the corresponding Oracle client for your
    OS. Since version 5.32 this Perl distribution includes pre-compiled
    drivers for DBD::Oracle and DBD::Pg.

  Requirement
    The Oracle Instant Client or a full Oracle installation must be
    installed on the system. You can download the RPM from Oracle download
    center:

        rpm -ivh oracle-instantclient12.2-basic-12.2.0.1.0-1.x86_64.rpm
        rpm -ivh oracle-instantclient12.2-devel-12.2.0.1.0-1.x86_64.rpm
        rpm -ivh oracle-instantclient12.2-jdbc-12.2.0.1.0-1.x86_64.rpm
        rpm -ivh oracle-instantclient12.2-sqlplus-12.2.0.1.0-1.x86_64.rpm

    or simply download the corresponding ZIP archives from Oracle download
    center and install them where you want, for example:
    /opt/oracle/instantclient_12_2/

    You also need a modern Perl distribution (Perl 5.10 or later). To
    connect to a database and proceed with its migration you need the DBI
    Perl module > 1.614. To migrate an Oracle database you need the
    DBD::Oracle Perl module to be installed.

    To install DBD::Oracle and have it working you need to have the Oracle
    client libraries installed and the ORACLE_HOME environment variable must
    be defined.

    If you plan to export a MySQL database you need to install the Perl
    module DBD::mysql, which requires that the MySQL client libraries are
    installed.

    If you plan to export a SQL Server database you need to install the Perl
    module DBD::ODBC which requires that the unixODBC package is installed.

    On some Perl distributions you may need to install the Time::HiRes Perl
    module.

    If your distribution doesn't include these Perl modules you can install
    them using CPAN:

            perl -MCPAN -e 'install DBD::Oracle'
            perl -MCPAN -e 'install DBD::mysql'
            perl -MCPAN -e 'install DBD::ODBC'
            perl -MCPAN -e 'install Time::HiRes'

    otherwise use the packages provided by your distribution.

  Optional
    By default Ora2Pg dumps exports to flat files; to load them into your
    PostgreSQL database you need the PostgreSQL client (psql). If you don't
    have it on the host running Ora2Pg you can always transfer these files
    to a host with the psql client installed. If you prefer to load the
    export 'on the fly', the Perl module DBD::Pg is required.

    Ora2Pg allows you to dump all output in a compressed gzip file; to do
    that you need the Compress::Zlib Perl module, or if you prefer using
    bzip2 compression, the bzip2 program must be available in your PATH.

    If your distribution doesn't include these Perl modules you can install
    them using CPAN:

            perl -MCPAN -e 'install DBD::Pg'
            perl -MCPAN -e 'install Compress::Zlib'

    otherwise use the packages provided by your distribution.

  Instruction for SQL Server
    For SQL Server you need to install the unixodbc package and the Perl
    DBD::ODBC driver:

            sudo apt install unixodbc
            sudo apt install libdbd-odbc-perl

    or

            sudo yum install unixodbc
            sudo yum install perl-DBD-ODBC
            sudo yum install perl-DBD-Pg

    then install the Microsoft ODBC Driver for SQL Server. Follow the
    instructions for your operating system from here:

            https://docs.microsoft.com/fr-fr/sql/connect/odbc/linux-mac/installing-the-microsoft-odbc-driver-for-sql-server?view=sql-server-ver16

    Once it is done set the following in the /etc/odbcinst.ini file by
    adjusting the SQL Server ODBC driver version:

            [msodbcsql18]
            Description=Microsoft ODBC Driver 18 for SQL Server
            Driver=/opt/microsoft/msodbcsql18/lib64/libmsodbcsql-18.0.so.1.1
            UsageCount=1

    See ORACLE_DSN to know how to use the driver to connect to your MSSQL
    database.

  Installing Ora2Pg
    Like any other Perl Module Ora2Pg can be installed with the following
    commands:

            tar xjf ora2pg-x.x.tar.bz2
            cd ora2pg-x.x/
            perl Makefile.PL
            make && make install

    This will install Ora2Pg.pm into your site Perl repository, ora2pg into
    /usr/local/bin/ and ora2pg.conf into /etc/ora2pg/.

    On Windows(tm) OSes you may use instead:

            perl Makefile.PL
            gmake && gmake install

    This will install scripts and libraries into your Perl site installation
    directory and the ora2pg.conf file as well as all documentation files
    into C:\ora2pg\

    To install ora2pg in a different directory than the default one, simply
    use this command:

            perl Makefile.PL PREFIX=<your_install_dir>
            make && make install

    then set PERL5LIB to the path to your installation directory before
    using Ora2Pg.

            export PERL5LIB=<your_install_dir>
            ora2pg -c config/ora2pg.conf -t TABLE -b outdir/

  Packaging
    If you want to build the binary package for your preferred Linux
    distribution take a look at the packaging/ directory of the source
    tarball. There is everything to build RPM, Slackware and Debian
    packages. See README file in that directory.

  Installing DBD::Oracle
    Ora2Pg needs the Perl module DBD::Oracle for connectivity to an Oracle
    database from Perl DBI. Get DBD::Oracle from CPAN, the Perl module
    repository.

    After setting the ORACLE_HOME and LD_LIBRARY_PATH environment variables
    as root user, install DBD::Oracle. Proceed as follows:

            export LD_LIBRARY_PATH=/usr/lib/oracle/12.2/client64/lib
            export ORACLE_HOME=/usr/lib/oracle/12.2/client64
            perl -MCPAN -e 'install DBD::Oracle'

    If you are running it for the first time it will ask many questions; you
    can keep the defaults by pressing the ENTER key, but you need to give an
    appropriate mirror site for CPAN to download the modules. Install
    through CPAN manually if the above doesn't work:

            #perl -MCPAN -e shell
            cpan> get DBD::Oracle
            cpan> quit
            cd ~/.cpan/build/DBD-Oracle*
            export LD_LIBRARY_PATH=/usr/lib/oracle/11.2/client64/lib
            export ORACLE_HOME=/usr/lib/oracle/11.2/client64
            perl Makefile.PL
            make
            make install

    Installing DBD::Oracle requires that the three Oracle packages
    instant-client, SDK and SQLplus are installed, as well as the libaio1
    library.

    If you are using Instant Client from ZIP archives, the LD_LIBRARY_PATH
    and ORACLE_HOME will be the same and must be set to the directory where
    you have installed the files. For example:
    /opt/oracle/instantclient_12_2/

CONFIGURATION
    Ora2Pg configuration can be as simple as choosing the Oracle database to
    export and choosing the export type. This can be done in a minute.

    By reading this documentation you will also be able to:

            - Select only certain tables and/or columns for export.
            - Rename some tables and/or columns during export.
            - Select data to export following a WHERE clause per table.
            - Delay database constraints during data loading.
            - Compress exported data to save disk space.
            - and much more.

    Full control of the Oracle database migration is taken through a
    single configuration file named ora2pg.conf. The format of this file
    consists of a directive name in upper case followed by a tab character
    and a value. Comments are lines beginning with a #.

    There is no specific order for the configuration directives; they are
    set at the time they are read in the configuration file.

    For configuration directives that take a single value, you can use
    them multiple times in the configuration file but only the last
    occurrence found in the file will be used. For configuration directives
    that allow a list of values, you can use them multiple times; the values
    will be appended to the list. If you use the IMPORT directive to load a
    custom configuration file, directives defined in this file will be
    stored from the place the IMPORT directive is found, so it is better to
    put it at the end of the configuration file.
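
    As an illustration of the format described above, and assuming EXCLUDE is
    used here as a list directive (the values and the imported file path are
    placeholders), a fragment could look like this:

            # both lines contribute to the exclusion list
            EXCLUDE         TMP_.*
            EXCLUDE         LOG_.*
            # site-specific overrides loaded last
            IMPORT          /etc/ora2pg/local.conf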

    Values set in command line options will override values from the
    configuration file.

  Ora2Pg usage
    First of all, be sure that your library and binary paths include the
    Oracle Instant Client installation:

            export LD_LIBRARY_PATH=/usr/lib/oracle/11.2/client64/lib
            export PATH="/usr/lib/oracle/11.2/client64/bin:$PATH"

    By default Ora2Pg will look for the /etc/ora2pg/ora2pg.conf configuration
    file; if the file exists you can simply execute:

            /usr/local/bin/ora2pg

    or under Windows(tm) run the ora2pg.bat file, located in your Perl bin
    directory. Windows(tm) users may also find a template configuration file
    in C:\ora2pg

    If you want to use another configuration file, just give its path as a
    command line argument:

            /usr/local/bin/ora2pg -c /etc/ora2pg/new_ora2pg.conf

    Here are all command line parameters available when using ora2pg:

    Usage: ora2pg [-dhpqv --estimate_cost --dump_as_html] [--option value]

        -a | --allow str  : Comma separated list of objects to allow from export.
                            Can be used with SHOW_COLUMN too.
        -b | --basedir dir: Set the default output directory, where files
                            resulting from exports will be stored.
        -c | --conf file  : Set an alternate configuration file other than the
                            default /etc/ora2pg/ora2pg.conf.
        -C | --cdc_file file: File used to store/read SCN per table during export.
                            default: TABLES_SCN.log in the current directory. This
                            is the file written by the --cdc_ready option.
        -d | --debug      : Enable verbose output.
        -D | --data_type str : Allow custom type replacement at command line.
        -e | --exclude str: Comma separated list of objects to exclude from export.
                            Can be used with SHOW_COLUMN too.
        -h | --help       : Print this short help.
        -g | --grant_object type : Extract privilege from the given object type.
                            See possible values with GRANT_OBJECT configuration.
        -i | --input file : File containing Oracle PL/SQL code to convert with
                            no Oracle database connection initiated.
        -j | --jobs num   : Number of parallel processes to send data to PostgreSQL.
        -J | --copies num : Number of parallel connections to extract data from Oracle.
        -l | --log file   : Set a log file. Default is stdout.
        -L | --limit num  : Number of tuples extracted from Oracle and stored in
                            memory before writing, default: 10000.
        -m | --mysql      : Export a MySQL database instead of an Oracle schema.
        -M | --mssql      : Export a Microsoft SQL Server database.
        -n | --namespace schema : Set the Oracle schema to extract from.
        -N | --pg_schema schema : Set PostgreSQL's search_path.
        -o | --out file   : Set the path to the output file where SQL will
                            be written. Default: output.sql in running directory.
        -p | --plsql      : Enable PLSQL to PLPGSQL code conversion.
        -P | --parallel num: Number of parallel tables to extract at the same time.
        -q | --quiet      : Disable progress bar.
        -r | --relative   : use \ir instead of \i in the psql scripts generated.
        -s | --source DSN : Set the Oracle DBI data source.
        -S | --scn    SCN : Set the Oracle System Change Number (SCN) to use
                            to export data. It will be used in the WHERE clause
                            to get the data. It is used with action COPY or INSERT.
        -t | --type export: Set the export type. It will override the one
                            given in the configuration file (TYPE).
        -T | --temp_dir dir: Set a distinct temporary directory when two
                            or more ora2pg are run in parallel.
        -u | --user name  : Set the Oracle database connection user.
                            ORA2PG_USER environment variable can be used instead.
        -v | --version    : Show Ora2Pg Version and exit.
        -w | --password pwd : Set the password of the Oracle database user.
                            ORA2PG_PASSWD environment variable can be used instead.
        -W | --where clause : Set the WHERE clause to apply to the Oracle query to
                            retrieve data. Can be used multiple times.
        --forceowner      : Force ora2pg to set tables and sequences owner like in
                      Oracle database. If the value is set to a username this one
                      will be used as the objects owner. By default it's the user
                      used to connect to the Pg database that will be the owner.
        --nls_lang code: Set the Oracle NLS_LANG client encoding.
        --client_encoding code: Set the PostgreSQL client encoding.
        --view_as_table str: Comma separated list of views to export as table.
        --estimate_cost   : Activate the migration cost evaluation with SHOW_REPORT
        --cost_unit_value minutes: Number of minutes for a cost evaluation unit.
                      default: 5 minutes, corresponds to a migration conducted by a
                      PostgreSQL expert. Set it to 10 if this is your first migration.
       --dump_as_html     : Force ora2pg to dump report in HTML, used only with
                            SHOW_REPORT. Default is to dump report as simple text.
       --dump_as_csv      : As above but force ora2pg to dump report in CSV.
       --dump_as_json     : As above but force ora2pg to dump report in JSON.
       --dump_as_sheet    : Report migration assessment with one CSV line per database.
       --init_project name: Initialise a typical ora2pg project tree. Top directory
                            will be created under project base dir.
       --project_base dir : Define the base dir for ora2pg project trees. Default
                            is current directory.
       --print_header     : Used with --dump_as_sheet to print the CSV header
                            especially for the first run of ora2pg.
       --human_days_limit num : Set the number of person-days limit where the migration
                            assessment level switch from B to C. Default is set to
                            5 person-days.
       --audit_user list  : Comma separated list of usernames to filter queries in
                            the DBA_AUDIT_TRAIL table. Used only with SHOW_REPORT
                            and QUERY export type.
       --pg_dsn DSN       : Set the datasource to PostgreSQL for direct import.
       --pg_user name     : Set the PostgreSQL user to use.
       --pg_pwd password  : Set the PostgreSQL password to use.
       --count_rows       : Force ora2pg to perform a real row count in TEST,
                            TEST_COUNT and SHOW_TABLE actions.
       --no_header        : Do not append Ora2Pg header to output file
       --oracle_speed     : Use to know at which speed Oracle is able to send
                            data. No data will be processed or written.
       --ora2pg_speed     : Use to know at which speed Ora2Pg is able to send
                            transformed data. Nothing will be written.
       --blob_to_lo       : Export BLOB as large objects; can only be used with
                            actions SHOW_COLUMN, TABLE and INSERT.
       --cdc_ready        : use current SCN per table to export data and register
                            them into a file named TABLES_SCN.log per default. It
                            can be changed using -C | --cdc_file.
       --lo_import        : Use psql \lo_import command to import BLOB as large
                            objects. Can be used to import data with COPY and import
                            large objects manually in a second pass. It is required
                            for BLOB > 1GB. See documentation for more explanation.
       --mview_as_table str: Comma separated list of materialized views to export
                            as regular table.
       --drop_if_exists   : Drop the object before creation if it exists.
       --delete clause    : Set the DELETE clause to apply to the Oracle query to
                            be applied before importing data. Can be used multiple
                            times.
       --oracle_fdw_prefetch: Set the oracle_fdw prefetch value. Larger values
                            generally result in faster data transfer at the cost
                            of greater memory utilisation at the destination.

    See full documentation at https://ora2pg.darold.net/ for more help or
    see manpage with 'man ora2pg'.

    ora2pg will return 0 on success, 1 on error. It will return 2 when a
    child process has been interrupted and you've gotten the warning
    message: "WARNING: an error occurs during data export. Please check
    what's happen." Most of the time this is an OOM issue; first try
    reducing the DATA_LIMIT value.
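
    For example, to reduce the memory footprint you could halve the default
    of 10000 rows either on the command line or in ora2pg.conf (5000 is just
    an illustrative value):

            ora2pg -c config/ora2pg.conf -t COPY -L 5000

    or, in the configuration file:

            DATA_LIMIT      5000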

    For developers, it is possible to add your own custom option(s) in the
    Perl script ora2pg as any configuration directive from ora2pg.conf can
    be passed in lower case to the new Ora2Pg object instance. See ora2pg
    code on how to add your own option.

    Note that performance might be improved by updating statistics on the
    Oracle side before the export, for example:

            BEGIN
              -- gather statistics for the current schema only
              DBMS_STATS.GATHER_SCHEMA_STATS(ownname => USER);
              -- or refresh database-wide and dictionary statistics (requires DBA privileges)
              DBMS_STATS.GATHER_DATABASE_STATS;
              DBMS_STATS.GATHER_DICTIONARY_STATS;
            END;
            /

  Generate a migration template
    The two options --project_base and --init_project, when used, indicate
    to ora2pg that it has to create a project template with a work tree, a
    configuration file and a script to export all objects from the Oracle
    database. Here is a sample of the command usage:

            ora2pg --project_base /app/migration/ --init_project test_project
            Creating project test_project.
            /app/migration/test_project/
                    schema/
                            dblinks/
                            directories/
                            functions/
                            grants/
                            mviews/
                            packages/
                            partitions/
                            procedures/
                            sequences/
                            synonyms/
                            tables/
                            tablespaces/
                            triggers/
                            types/
                            views/
                    sources/
                            functions/
                            mviews/
                            packages/
                            partitions/
                            procedures/
                            triggers/
                            types/
                            views/
                    data/
                    config/
                    reports/

            Generating generic configuration file
            Creating script export_schema.sh to automate all exports.
            Creating script import_all.sh to automate all imports.

    It creates a generic config file where you just have to define the
    Oracle database connection, and a shell script called export_schema.sh.
    The sources/ directory will contain the Oracle code, the schema/
    directory will contain the code ported to PostgreSQL. The reports/
    directory will contain the HTML reports with the migration cost
    assessment.

    If you want to use your own default config file, use the -c option to
    give the path to that file. Rename it with a .dist suffix if you want
    ora2pg to apply the generic configuration values; otherwise the
    configuration file will be copied untouched.

    Once you have set the connection to the Oracle database you can execute
    the script export_schema.sh, which will export all object types from
    your Oracle database and output DDL files into the schema/
    subdirectories. At the end of the export it will give you the command to
    export data later, once the import of the schema has been done and
    verified.

    You can choose to load the generated DDL files manually or use the
    second script, import_all.sh, to import those files interactively. If
    this kind of migration is not something usual for you, it is recommended
    to use those scripts.
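
    A typical session, once the Oracle connection is configured, could then
    look like this (the exact data export command is printed by
    export_schema.sh and may differ):

            cd /app/migration/test_project/
            sh export_schema.sh
            # review the files under schema/ and sources/, then:
            sh import_all.sh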

  Oracle database connection
    There are several configuration directives to control access to the
    Oracle database.

    ORACLE_HOME
        Used to set the ORACLE_HOME environment variable to the Oracle
        libraries required by the DBD::Oracle Perl module.

    ORACLE_DSN
        This directive is used to set the data source name in the form of a
        standard DBI DSN. For example:

                dbi:Oracle:host=oradb_host.myhost.com;sid=DB_SID;port=1521

        or

                dbi:Oracle:DB_SID

        On 18c this could be for example:

                dbi:Oracle:host=192.168.1.29;service_name=pdb1;port=1521

        For the second notation the SID should be declared in the well-known
        file $ORACLE_HOME/network/admin/tnsnames.ora or in the path given by
        the TNS_ADMIN environment variable.

        For MySQL the DSN will look like this:

                dbi:mysql:host=192.168.1.10;database=sakila;port=3306

        the 'sid' part is replaced by 'database'.

        For MS SQL Server it will look like this:

                dbi:ODBC:driver=msodbcsql18;server=mydb.database.windows.net;database=testdb;TrustServerCertificate=yes

    ORACLE_USER and ORACLE_PWD
        These two directives are used to define the user and password for
        the Oracle database connection. Note that if you can, it is better
        to log in as an Oracle super user to avoid grant problems during the
        database scan and to be sure that nothing is missing.

        If you do not supply a credential with ORACLE_PWD and you have
        installed the Term::ReadKey Perl module, Ora2Pg will ask for the
        password interactively. If ORACLE_USER is not set it will be asked
        interactively too.

        To connect to a local ORACLE instance with connections "as sysdba"
        you have to set ORACLE_USER to "/" and an empty password.

        To make a connection using an Oracle Secure External Password Store
        (SEPS), first configure the Oracle Wallet and then set both the
        ORACLE_USER and ORACLE_PWD directives to the special value of
        "__SEPS__" (without the quotes but with the double underscore).

    USER_GRANTS
        Set this directive to 1 if you connect to the Oracle database as a
        simple user and do not have enough grants to extract things from the
        DBA_... tables. It will use the ALL_... tables instead.

        Warning: if you use export type GRANT, you must set this
        configuration option to 0 or it will not work.

    TRANSACTION
        This directive may be used if you want to change the default
        isolation level of the data export transaction. Default is now to
        set the level to a serializable transaction to ensure data
        consistency. The allowed values for this directive are:

                readonly: 'SET TRANSACTION READ ONLY',
                readwrite: 'SET TRANSACTION READ WRITE',
                serializable: 'SET TRANSACTION ISOLATION LEVEL SERIALIZABLE'
                committed: 'SET TRANSACTION ISOLATION LEVEL READ COMMITTED',

        Releases before 6.2 used to set the isolation level to a READ ONLY
        transaction, but in some cases this was breaking data consistency,
        so the default is now SERIALIZABLE.

    INPUT_FILE
        This directive does not control the Oracle database connection;
        rather, it disables the use of any Oracle database by accepting a
        file as input. Set this directive to a file containing Oracle PL/SQL
        code, such as a function, procedure or full package body, to prevent
        Ora2Pg from connecting to an Oracle database and instead apply its
        conversion tool to the content of the file. This can be used with
        most export types: TABLE, TRIGGER, PROCEDURE, VIEW, FUNCTION,
        PACKAGE, etc.
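
        For instance, to convert a package body stored in a local file
        without opening any Oracle connection (the file names are
        hypothetical):

                ora2pg -c config/ora2pg.conf -t PACKAGE -p -i my_package.sql -o my_package_pg.sql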

    ORA_INITIAL_COMMAND
        This directive can be used to send an initial command to Oracle,
        just after the connection. For example to unlock a policy before
        reading objects or to set some session parameters. This directive
        can be used multiple times.

  Data encryption with Oracle server
    If your Oracle client configuration file already includes the
    encryption method, then DBD::Oracle uses those settings to encrypt the
    connection while you extract the data. For example if you have
    configured the Oracle client configuration file (sqlnet.ora or .sqlnet)
    with the following information:

            # Configure encryption of connections to Oracle
            SQLNET.ENCRYPTION_CLIENT = required
            SQLNET.ENCRYPTION_TYPES_CLIENT = (AES256, RC4_256)
            SQLNET.CRYPTO_SEED = 'should be 10-70 random characters'

    Any tool that uses the Oracle client to talk to the database will have
    its connection encrypted if you set up session encryption like above.

    For example, Perl's DBI uses DBD::Oracle, which uses the Oracle client
    for actually handling database communication. If the installation of the
    Oracle client used by Perl is set up to request encrypted connections,
    then your Perl connection to an Oracle database will also be encrypted.

    Full details at
    https://kb.berkeley.edu/jivekb/entry.jspa?externalID=1005

  Testing connection
    Once you have set the Oracle database DSN you can execute ora2pg to see
    if it works:

            ora2pg -t SHOW_VERSION -c config/ora2pg.conf

    will show the Oracle database server version. Take some time here to
    test your installation as most problems show up at this stage; the other
    configuration steps are more technical.

  Troubleshooting
    If the output.sql file contains nothing other than the Pg transaction
    header and footer, there are two possible reasons. Either the perl
    script ora2pg dumps an ORA-XXX error, which means that your DSN or login
    information is wrong; check the error and your settings and try again.
    Or the perl script says nothing and the output file is empty: the user
    lacks permission to extract anything from the database. Try to connect
    to Oracle as a super user, or take a look at the directive USER_GRANTS
    above and at the next section, especially the SCHEMA directive.

    LOGFILE
        By default all messages are sent to the standard output. If you give
        a file path to that directive, all output will be appended to this
        file.

  Oracle schema to export
    The Oracle database export can be limited to a specific schema or
    namespace; this can be mandatory depending on the database connection
    user.

    SCHEMA
        This directive is used to set the schema name to use during export.
        For example:

                SCHEMA  APPS

        will extract objects associated to the APPS schema.

        When no schema name is provided and EXPORT_SCHEMA is enabled, Ora2Pg
        will export all objects from all schemas of the Oracle instance with
        their names prefixed with the schema name.

    EXPORT_SCHEMA
        By default the Oracle schema is not exported into the PostgreSQL
        database and all objects are created under the default Pg namespace.
        If you want to also export this schema and create all objects under
        this namespace, set the EXPORT_SCHEMA directive to 1. This will set
        the schema search_path at the top of the export SQL file to the
        schema name set in the SCHEMA directive, together with the default
        pg_catalog schema. If you want to change this path, use the
        directive PG_SCHEMA.
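
        For instance, with SCHEMA set to APPS and EXPORT_SCHEMA enabled, the
        generated export file should start with a line similar to:

                SET search_path = apps,pg_catalog;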

    CREATE_SCHEMA
        Enable/disable the CREATE SCHEMA SQL order at the start of the
        output file. It is enabled by default and only concerns the TABLE
        export type.

    COMPILE_SCHEMA
        By default Ora2Pg will only export valid PL/SQL code. You can force
        Oracle to compile the invalidated code again to give it a chance to
        obtain the valid status and then be able to export it.

        Enable this directive to force Oracle to compile the schema before
        exporting code. When this directive is enabled and SCHEMA is set to
        a specific schema name, only invalid objects in this schema will be
        recompiled. If SCHEMA is not set then all schemas will be
        recompiled. To force recompilation of invalid objects in a specific
        schema, set COMPILE_SCHEMA to the schema name you want to recompile.

        This will ask Oracle to validate the PL/SQL that could have been
        invalidated after an export/import for example. The 'VALID' or
        'INVALID' status applies to functions, procedures, packages and user
        defined types. It also concerns disabled triggers.

    EXPORT_INVALID
        If the above configuration directive is not enough to validate your
        PL/SQL code enable this configuration directive to allow export of
        all PL/SQL code even if it is marked as invalid. The 'VALID' or
        'INVALID' status applies to functions, procedures, packages,
        triggers and user defined types.

    PG_SCHEMA
        Allows you to define/force the PostgreSQL schema to use. By default,
        if you set EXPORT_SCHEMA to 1, the PostgreSQL search_path will be
        set to the exported schema name set as value of the SCHEMA
        directive.

        The value can be a comma-delimited list of schema names, but not
        when using the TABLE export type, because in this case it will
        generate the CREATE SCHEMA statement, which doesn't support multiple
        schema names. For example, if you set PG_SCHEMA to something like
        "user_schema, public", the search path will be set like this:

                SET search_path = user_schema, public;

        forcing the use of a schema (here user_schema) other than the Oracle
        schema set in the SCHEMA directive.

        You can also set the default search_path for the PostgreSQL user you
        are using to connect to the destination database by using:

                ALTER ROLE username SET search_path TO user_schema, public;

        in this case you don't have to set PG_SCHEMA.

    SYSUSERS
        Without an explicit schema, Ora2Pg will export all objects that do
        not belong to a system schema or role:

                SYSTEM,CTXSYS,DBSNMP,EXFSYS,LBACSYS,MDSYS,MGMT_VIEW,
                OLAPSYS,ORDDATA,OWBSYS,ORDPLUGINS,ORDSYS,OUTLN,
                SI_INFORMTN_SCHEMA,SYS,SYSMAN,WK_TEST,WKSYS,WKPROXY,
                WMSYS,XDB,APEX_PUBLIC_USER,DIP,FLOWS_020100,FLOWS_030000,
                FLOWS_040100,FLOWS_010600,FLOWS_FILES,MDDATA,ORACLE_OCM,
                SPATIAL_CSW_ADMIN_USR,SPATIAL_WFS_ADMIN_USR,XS$NULL,PERFSTAT,
                SQLTXPLAIN,DMSYS,TSMSYS,WKSYS,APEX_040000,APEX_040200,
                DVSYS,OJVMSYS,GSMADMIN_INTERNAL,APPQOSSYS,DVSYS,DVF,
                AUDSYS,APEX_030200,MGMT_VIEW,ODM,ODM_MTR,TRACESRV,MTMSYS,
                OWBSYS_AUDIT,WEBSYS,WK_PROXY,OSE$HTTP$ADMIN,
                AURORA$JIS$UTILITY$,AURORA$ORB$UNAUTHENTICATED,
                DBMS_PRIVILEGE_CAPTURE,CSMIG,MGDSYS,SDE,DBSFWUSER

        Depending on your Oracle installation you may have several other
        system roles defined. To append these users to the schema exclusion
        list, just set the SYSUSERS configuration directive to a
        comma-separated list of system users to exclude. For example:

                SYSUSERS        INTERNAL,SYSDBA,BI,HR,IX,OE,PM,SH

        will add users INTERNAL and SYSDBA to the schema exclusion list.

    FORCE_OWNER
        By default the owner of the database objects is the one you're using
        to connect to PostgreSQL with the psql command. If you use another
        user (postgres for example) you can force Ora2Pg to set the object
        owner to the one used in the Oracle database by setting the
        directive to 1, or to a completely different username by setting the
        directive value to that username.

    FORCE_SECURITY_INVOKER
        Ora2Pg uses the function's security privileges set in Oracle, which
        are often defined as SECURITY DEFINER. If you want to override those
        security privileges for all functions and use SECURITY INVOKER
        instead, enable this directive.

    USE_TABLESPACE
        When enabled this directive forces ora2pg to export all tables,
        index constraints and indexes using the tablespace names defined in
        the Oracle database. This works only with tablespaces that are not
        TEMP, USERS and SYSTEM.

    WITH_OID
        Activating this directive will force Ora2Pg to add WITH (OIDS) when
        creating tables or views as tables. Default is same as PostgreSQL,
        disabled.

    LOOK_FORWARD_FUNCTION
        List of schemas to scan for functions/procedures meta information
        used in the current schema export. When replacing a call to a
        function with OUT parameters, if the function is declared in another
        package then the function call rewriting can not be done because
        Ora2Pg only knows about functions declared in the current schema. By
        setting a comma-separated list of schemas as value of this
        directive, Ora2Pg will look in these packages for all
        functions/procedures/packages declarations before proceeding to the
        current schema export.

    NO_FUNCTION_METADATA
        Force Ora2Pg to not look for function declarations. Note that this
        will prevent Ora2Pg from rewriting function replacement calls if
        needed. Do not enable it unless the function lookup breaks other
        exports.

  Export type
    The export action is performed according to a single configuration
    directive, TYPE; some other directives add more control over what should
    really be exported.

    TYPE
        Here are the different values of the TYPE directive, default is
        TABLE:

                - TABLE: Extract all tables with indexes, primary keys, unique keys,
                  foreign keys and check constraints.
                - VIEW: Extract only views.
                - GRANT: Extract roles converted to Pg groups, users and grants on all
                  objects.
                - SEQUENCE: Extract all sequences and their last position.
                - TABLESPACE: Extract storage spaces for tables and indexes (Pg >= v8).
                - TRIGGER: Extract triggers defined following actions.
                - FUNCTION: Extract functions.
                - PROCEDURE: Extract procedures.
                - PACKAGE: Extract packages and package bodies.
                - INSERT: Extract data as INSERT statement.
                - COPY: Extract data as COPY statement.
                - PARTITION: Extract range and list Oracle partitions with subpartitions.
                - TYPE: Extract user defined Oracle type.
                - FDW: Export Oracle tables as foreign table for Oracle, MySQL and SQL Server FDW.
                - MVIEW: Export materialized view.
                - QUERY: Try to automatically convert Oracle SQL queries.
                - KETTLE: Generate XML ktr template files to be used by Kettle.
                - DBLINK: Generate oracle foreign data wrapper server to use as dblink.
                - SYNONYM: Export Oracle's synonyms as views on other schema's objects.
                - DIRECTORY: Export Oracle's directories as external_file extension objects.
                - LOAD: Dispatch a list of queries over multiple PostgreSQL connections.
                - TEST: perform a diff between Oracle and PostgreSQL database.
                - TEST_COUNT: perform a row count diff between Oracle and PostgreSQL table.
                - TEST_VIEW: perform a count on both side of number of rows returned by views.
                - TEST_DATA: perform data validation check on rows at both sides.
                - SEQUENCE_VALUES: export DDL to set the last values of sequences

        Only one type of export can be performed at a time, so the TYPE
        directive must be unique. If you set more than one, only the last
        one found in the file will be registered.

        Some export types can not or should not be loaded directly into the
        PostgreSQL database and still require a little manual editing. This
        is the case for the GRANT, TABLESPACE, TRIGGER, FUNCTION, PROCEDURE,
        TYPE, QUERY and PACKAGE export types, especially if you have PL/SQL
        code or Oracle-specific SQL in them.

        For TABLESPACE you must ensure that the file paths exist on the
        system, and for SYNONYM you should ensure that the object owners and
        schemas correspond to the new PostgreSQL database design.

        Note that you can chain multiple exports by giving the TYPE
        directive a comma-separated list of export types, but in this case
        you must not use COPY or INSERT with other export types.

        Ora2Pg will convert Oracle partitions using table inheritance,
        triggers and functions. See the documentation at the PostgreSQL
        site:
        http://www.postgresql.org/docs/current/interactive/ddl-partitioning.html

        The TYPE export allows export of user defined Oracle types. If you
        don't use the --plsql command line parameter it simply dumps the
        Oracle user types as is, otherwise Ora2Pg will try to convert them
        to PostgreSQL syntax.

        The KETTLE export type requires that the Oracle and PostgreSQL DSN
        are defined.

        Since Ora2Pg v8.1 there are several informative export types:

                SHOW_VERSION : display Oracle version
                SHOW_SCHEMA  : display the list of schema available in the database.
                SHOW_TABLE   : display the list of tables available.
                SHOW_COLUMN  : display the list of tables columns available and the
                        Ora2PG conversion type from Oracle to PostgreSQL that will be
                        applied. It will also warn you if there's PostgreSQL reserved
                        words in Oracle object names.

        Here is an example of the SHOW_COLUMN output:

                [2] TABLE CURRENT_SCHEMA (1 rows) (Warning: 'CURRENT_SCHEMA' is a reserved word in PostgreSQL)
                        CONSTRAINT : NUMBER(22) => bigint (Warning: 'CONSTRAINT' is a reserved word in PostgreSQL)
                        FREEZE : VARCHAR2(25) => varchar(25) (Warning: 'FREEZE' is a reserved word in PostgreSQL)
                ...
                [6] TABLE LOCATIONS (23 rows)
                        LOCATION_ID : NUMBER(4) => smallint
                        STREET_ADDRESS : VARCHAR2(40) => varchar(40)
                        POSTAL_CODE : VARCHAR2(12) => varchar(12)
                        CITY : VARCHAR2(30) => varchar(30)
                        STATE_PROVINCE : VARCHAR2(25) => varchar(25)
                        COUNTRY_ID : CHAR(2) => char(2)

        Those extraction keywords are used to only display the requested
        information and exit. This allows you to quickly know what you are
        going to work on.

        The SHOW_COLUMN type accepts another ora2pg command line option:
        '--allow relname' or '-a relname', to limit the displayed
        information to the given table.
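
        For example, to restrict the output above to the LOCATIONS table:

                ora2pg -c config/ora2pg.conf -t SHOW_COLUMN -a LOCATIONS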

        The SHOW_ENCODING export type will display the NLS_LANG and
        CLIENT_ENCODING values that Ora2Pg will use and the real encoding
        of the Oracle database with the corresponding client encoding that
        could be used with PostgreSQL.

        Ora2Pg allows you to export your Oracle, MySQL or MSSQL table
        definitions to be used with the oracle_fdw, mysql_fdw or tds_fdw
        foreign data wrappers. By using type FDW your tables will be
        exported as follows:

                CREATE FOREIGN TABLE oratab (
                        id        integer           NOT NULL,
                        text      character varying(30),
                        floating  double precision  NOT NULL
                ) SERVER oradb OPTIONS (table 'ORATAB');

        Now you can use the table like a regular PostgreSQL table.

        Release 10 adds a new export type intended to evaluate the content
        of the database to migrate, in terms of objects and cost to complete
        the migration:

                SHOW_REPORT  : show a detailed report of the Oracle database content.

        Here is a sample report: http://ora2pg.darold.net/report.html

        There is also a more advanced report with migration cost. See the
        dedicated chapter about Migration Cost Evaluation.

    ESTIMATE_COST
        Activate the migration cost evaluation. Must only be used with
        SHOW_REPORT, FUNCTION, PROCEDURE, PACKAGE and QUERY export type.
        Default is disabled. You may want to use the --estimate_cost command
        line option instead to activate this functionality. Note that
        enabling this directive will force PLSQL_PGSQL activation.

    COST_UNIT_VALUE
        Set the value in minutes of the migration cost evaluation unit.
        Default is five minutes per unit. See --cost_unit_value to change
        the unit value at command line.

    DUMP_AS_HTML
        By default when using SHOW_REPORT the migration report is generated
        as simple text, enabling this directive will force ora2pg to create
        a report in HTML format.

        See http://ora2pg.darold.net/report.html for a sample report.

    HUMAN_DAYS_LIMIT
        Use this directive to redefine the number of person-days limit
        where the migration assessment level must switch from B to C.
        Default is set to 10 person-days.

    JOBS
        This configuration directive adds multiprocess support to the COPY,
        FUNCTION and PROCEDURE export types; the value is the number of
        processes to use. Default is multiprocess disabled.

        This directive is used to set the number of cores used to
        parallelize data import into PostgreSQL. During a FUNCTION or
        PROCEDURE export each function will be translated to plpgsql using a
        new process; the performance gain can be very significant when you
        have tons of functions to convert.

        There is no limit on parallel processing other than the number of
        cores and the PostgreSQL I/O performance capabilities.

        Doesn't work under Windows Operating System, it is simply disabled.

    ORACLE_COPIES
        This configuration directive adds multiprocess support to extract
        data from Oracle. The value is the number of processes used to
        parallelize the select query. Default is parallel query disabled.

        The parallelism is built by splitting the query according to the
        number of cores given as value to ORACLE_COPIES, as follows:

                SELECT * FROM MYTABLE WHERE ABS(MOD(COLUMN, ORACLE_COPIES)) = CUR_PROC

        where COLUMN is a technical key like a primary or unique key on
        which the split will be based, and CUR_PROC is the current core used
        by the query. You can also force the column name to use with the
        DEFINED_PK configuration directive.

        Doesn't work under Windows Operating System, it is simply disabled.

    DEFINED_PK
        This directive is used to define the technical key used to split
        the query between the number of cores set with the ORACLE_COPIES
        variable. For example:

                DEFINED_PK      EMPLOYEES:employee_id

        The parallel query that will be used, supposing that -J or
        ORACLE_COPIES is set to 8, is:

                SELECT * FROM EMPLOYEES WHERE ABS(MOD(employee_id, 8)) = N

        where N is the current process forked starting from 0.

    PARALLEL_TABLES
        This directive is used to define the number of tables that will be
        processed in parallel for data extraction. The limit is the number
        of cores on your machine. Ora2Pg will open one database connection
        for each parallel table extraction. This directive, when greater
        than 1, will invalidate ORACLE_COPIES but not JOBS, so the real
        number of processes that will be used is PARALLEL_TABLES * JOBS.

        Note that this directive, when set to a value greater than 1, will
        also automatically enable the FILE_PER_TABLE directive if you are
        exporting to files. This is used to export tables and views into
        separate files.

        Use PARALLEL_TABLES to use parallelism with COPY, INSERT and
        TEST_DATA actions. It is also useful with TEST, TEST_COUNT, and
        SHOW_TABLE if --count_rows is used for real row count.
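
        As an illustration, the following command would use up to
        PARALLEL_TABLES * JOBS = 4 * 2 = 8 processes for a COPY export (the
        values are arbitrary):

                ora2pg -c config/ora2pg.conf -t COPY -P 4 -j 2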

    DEFAULT_PARALLELISM_DEGREE
        You can force Ora2Pg to use the /*+ PARALLEL(tbname, degree) */ hint
        in each query used to export data from Oracle by setting a value
        greater than 1 for this directive. A value of 0 or 1 disables the
        use of the parallel hint. Default is disabled.

    FDW_SERVER
        This directive is used to set the name of the foreign data server
        that is used in the "CREATE SERVER name FOREIGN DATA WRAPPER
        <fdw_extension> ..." command. This name will then be used in the
        "CREATE FOREIGN TABLE ..." SQL commands and to import data using
        oracle_fdw. Default is no foreign server defined. This only concerns
        export type FDW, COPY and INSERT. For export type FDW the default
        value is orcl.
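
        For example, with FDW_SERVER set to orcl and oracle_fdw installed,
        the corresponding server definition would be along these lines (the
        connection string shown is illustrative):

                CREATE SERVER orcl FOREIGN DATA WRAPPER oracle_fdw
                        OPTIONS (dbserver '//oradb_host.myhost.com:1521/DB_SID');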

    FDW_IMPORT_SCHEMA
        Schema where foreign tables for data migration will be created. If
        you use several instances of ora2pg for data migration through the
        foreign data wrapper, you might need to change the name of the
        schema for each instance. Default: ora2pg_fdw_import

    ORACLE_FDW_PREFETCH
        The default behaviour of Ora2Pg is to NOT set the "prefetch" option
        for oracle_fdw when used for COPY and INSERT. This directive allows
        the prefetch to be set. See oracle_fdw documentation for the current
        default.

    ORACLE_FDW_COPY_MODE
        When using Ora2Pg COPY with oracle_fdw it is possible to use two
        different modes: 1) "local", which uses psql on the host running
        Ora2Pg for the "TO" binary stream; 2) "server", which uses
        PostgreSQL server-side COPY for the "TO" binary stream. Both modes
        use psql for the "FROM STDIN BINARY". However, "local" runs the psql
        "FROM STDIN BINARY" on host Ora2Pg is run from, whereas "server"
        runs the psql "FROM STDIN BINARY" on the PostgreSQL server. "local"
        mode should work on any PostgreSQL-based system, including managed
        offerings, which are not expected to support use of "server" mode
        due to permissions. The default is "local" as this is compatible
        with more configurations.

    ORACLE_FDW_COPY_FORMAT
        When using Ora2Pg COPY with oracle_fdw it is possible to use either
        BINARY or CSV data format. BINARY provides better performance,
        however, requires exact data type matching between the FDW and
        destination table. CSV provides greater flexibility with respect to
        data type matching: if the FDW and destination data types are
        functionally-compatible the columns can be copied. The default is
        "binary".

    DROP_FOREIGN_SCHEMA
        By default Ora2Pg drops the temporary schema ora2pg_fdw_import used
        to import the Oracle foreign schema before each new import. If you
        want to preserve the existing schema because of modifications or the
        use of a third party server, disable this directive.

    EXTERNAL_TO_FDW
        This directive, enabled by default, allows exporting Oracle's
        External Tables as file_fdw foreign tables. To not export these
        tables at all, set the directive to 0.

    INTERNAL_DATE_MAX
        Internal timestamps retrieved from custom types are extracted in the
        following format: 01-JAN-77 12.00.00.000000 AM. It is impossible to
        know the exact century that must be used, so by default any year
        below 49 will be added to 2000 and others to 1900. You can use this
        directive to change the default value 49. This is only relevant if
        you have a user defined type with a timestamp column.

    AUDIT_USER
        Set the comma-separated list of usernames that must be used to
        filter queries from the DBA_AUDIT_TRAIL table. Default is to not
        scan this table and to never look for queries. This parameter is
        used only with SHOW_REPORT and the QUERY export type with no input
        file for queries. Note that queries will be normalized before
        output, unlike when a file is given at input using the -i option or
        the INPUT directive.

    FUNCTION_CHECK
        Disable this directive if you want to disable check_function_bodies.

                SET check_function_bodies = false;

        It disables validation of the function body string during CREATE
        FUNCTION. Default is to use the postgresql.conf setting, which
        enables it by default.

    ENABLE_BLOB_EXPORT
        Exporting BLOBs takes time; in some circumstances you may want to
        export all data except the BLOB columns. In this case disable this
        directive and the BLOB columns will not be included in the data
        export. Take care that the target bytea columns do not have a NOT
        NULL constraint.

    ENABLE_CLOB_EXPORT
        Same behavior as ENABLE_BLOB_EXPORT but for CLOB.

    DATA_EXPORT_ORDER
        By default the data export order is obtained by sorting on table
        name. If you have huge tables at the end of the alphabetic order and
        you are using multiprocess, it can be better to set the sort order
        on size so that multiple small tables can be processed before the
        largest tables finish. In this case set this directive to size.
        Possible values are name and size. Note that the export types
        SHOW_TABLE and SHOW_COLUMN will use this sort order too, not only
        the COPY or INSERT export types. If you want to provide your own
        custom export order, just give a filename as value; it must contain
        the ordered list of tables to export, one table per line, in
        uppercase for Oracle.
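
        For instance, to sort by size, or to use a custom ordering stored in
        a file (the file path is just an illustration):

                DATA_EXPORT_ORDER       size

        or

                DATA_EXPORT_ORDER       /etc/ora2pg/tables_order.lst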

  Limiting objects to export
    You may want to export only a part of an Oracle database; here is a set
    of configuration directives that will allow you to control which parts
    of the database should be exported.

    ALLOW
        This directive allows you to set a list of objects to which the
        export must be limited, excluding all other objects of the same type
        of export. The value is a space or comma-separated list of object
        names to export. You can include valid regexps in the list. For
        example:

                ALLOW           EMPLOYEES SALE_.* COUNTRIES .*_GEOM_SEQ

        will export objects with the names EMPLOYEES and COUNTRIES, all
        objects beginning with 'SALE_' and all objects with a name ending in
        '_GEOM_SEQ'. The objects depend on the export type. Note that
        regexps will not work with an 8i database; you must use the %
        placeholder instead, and Ora2Pg will use the LIKE operator.

        This is the way to declare global filters that will be used with
        the current export type. You can also use extended filters that will
        be applied to specific objects or only to their related export type.
        For example:

                ora2pg -p -c ora2pg.conf -t TRIGGER -a 'TABLE[employees]'

        will limit the export of triggers to those defined on the table
        employees. If you want to extract all triggers but not some INSTEAD
        OF triggers:

                ora2pg -c ora2pg.conf -t TRIGGER -e 'VIEW[trg_view_.*]'

        Or a more complex form:

                ora2pg -p -c ora2pg.conf -t TABLE -a 'TABLE[EMPLOYEES]' \
                        -e 'INDEX[emp_.*];CKEY[emp_salary_min]'

        This command will export the definition of the employees table but
        will exclude all indexes beginning with 'emp_' and the CHECK
        constraint called 'emp_salary_min'.

        When exporting partitions you can exclude some partition tables by
        using

                ora2pg -p -c ora2pg.conf -t PARTITION -e 'PARTITION[PART_199.* PART_198.*]'

        This will exclude the partition tables for years 1980 to 1999 from
        the export, but not the main partition table. The triggers will
        also be adapted to exclude those tables.

        With GRANT export you can use this extended form to exclude some
        users from the export or limit the export to some others:

                ora2pg -p -c ora2pg.conf -t GRANT -a 'USER1 USER2'

        or

                ora2pg -p -c ora2pg.conf -t GRANT -a 'GRANT[USER1 USER2]'

        will limit the export of grants to users USER1 and USER2. But if
        you don't want to export grants on some functions for these users,
        for example:

                ora2pg -p -c ora2pg.conf -t GRANT -a 'USER1 USER2' -e 'FUNCTION[adm_.*];PROCEDURE[adm_.*]'

        Advanced filters may need some learning.

        Oracle doesn't allow the use of lookahead expressions, so you may
        want to exclude some objects that match the ALLOW regexp you have
        defined. For example, if you want to export all tables starting
        with E but not those starting with EXP, it is not possible to do
        that in a single expression. This is why you can start a regular
        expression with the ! character to exclude objects matching the
        regexp given just after it. Our previous example can be written as
        follows:

                ALLOW   E.* !EXP.*

        it will be translated into:

                 REGEXP_LIKE(..., '^E.*$') AND NOT REGEXP_LIKE(..., '^EXP.*$')

        in the object search expression.

    EXCLUDE
        This directive is the opposite of the previous one; it allows you
        to define a space or comma-separated list of object names to
        exclude from the export. You can include valid regexps in the list.
        For example:

                EXCLUDE         EMPLOYEES TMP_.* COUNTRIES

        will exclude the objects named EMPLOYEES and COUNTRIES and all
        objects beginning with 'TMP_'.

        For example, you can ban some unwanted functions from the export
        with this directive:

                EXCLUDE         write_to_.* send_mail_.*

        this example will exclude all functions, procedures, or functions
        in a package whose names begin with those regexps. Note that
        regexps will not work with an 8i database; you must use the %
        placeholder instead, and Ora2Pg will use the NOT LIKE operator.

        See above (directive 'ALLOW') for the extended syntax.

    NO_EXCLUDED_TABLE
        By default Ora2Pg excludes from the export some Oracle "garbage"
        tables that should never be part of an export. This behavior
        generates a lot of REGEXP_LIKE expressions which slow down the
        export when looking at tables. To disable this behavior enable this
        directive; you will then have to exclude or clean up the unwanted
        tables by yourself later. The regexps used to exclude the tables
        are defined in the array @EXCLUDED_TABLES in lib/Ora2Pg.pm. Note
        that this behavior is independent of the EXCLUDE configuration
        directive.

    VIEW_AS_TABLE
        Set which views to export as tables. By default none. The value
        must be a list of view names or regexps separated by spaces or
        commas. If the object name is a view and the export type is TABLE,
        the view will be exported as a CREATE TABLE statement. If the
        export type is COPY or INSERT, the corresponding data will be
        exported.

        See chapter "Exporting views as PostgreSQL table" for more details.
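
        For example, a possible setting (the view names below are only
        illustrative):

                # Export these views as real tables, and their data with COPY/INSERT
                VIEW_AS_TABLE   V_EMPLOYEES,V_SALES_.*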

    MVIEW_AS_TABLE
        Set which materialized views to export as tables. By default none.
        The value must be a list of materialized view names or regexps
        separated by spaces or commas. If the object name is a materialized
        view and the export type is TABLE, the view will be exported as a
        CREATE TABLE statement. If the export type is COPY or INSERT, the
        corresponding data will be exported.

    NO_VIEW_ORDERING
        By default Ora2Pg tries to order views to avoid errors at import
        time with nested views. With a huge number of views this can take a
        very long time; you can bypass this ordering by enabling this
        directive.

    GRANT_OBJECT
        When exporting GRANTs you can specify a comma-separated list of
        object types for which privileges will be exported. Default is to
        export privileges for all objects. Possible values are TABLE, VIEW,
        MATERIALIZED VIEW, SEQUENCE, PROCEDURE, FUNCTION, PACKAGE BODY,
        TYPE, SYNONYM and DIRECTORY. Only one object type is allowed at a
        time. For example, set it to TABLE if you just want to export
        privileges on tables. You can use the -g option to override it.

        When used, this directive prevents the export of users unless it is
        set to USER. In that case only user definitions are exported.
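
        For example, a possible setting would be:

                # Only export privileges granted on tables
                GRANT_OBJECT    TABLE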

    WHERE
        This directive allows you to specify a WHERE clause filter when
        dumping the contents of tables. The value is constructed as
        follows: TABLE_NAME[WHERE_CLAUSE], or if you have only one where
        clause for all tables just put the where clause as the value. Both
        forms can also be combined. Here are some examples:

                # Global where clause applying to all tables included in the export
                WHERE  1=1

                # Apply the where clause only on table TABLE_NAME
                WHERE  TABLE_NAME[ID1='001']

                # Applies two different clauses on tables TABLE_NAME and OTHER_TABLE
                # and a generic where clause on DATE_CREATE to all other tables
                WHERE  TABLE_NAME[ID1='001' OR ID1='002'] DATE_CREATE > '2001-01-01' OTHER_TABLE[NAME='test']

        Any where clause not included in a table name bracket clause will
        be applied to all exported tables, including the tables defined in
        the where clause. These WHERE clauses are very useful if you want
        to archive some data or, on the contrary, only export some recent
        data.

        To be able to quickly test data import it is useful to limit data
        export to the first thousand tuples of each table. For Oracle define
        the following clause:

                WHERE   ROWNUM < 1000

        and for MySQL, use the following:

                WHERE   1=1 LIMIT 1,1000

        This can also be restricted to the data export of some tables.

        The command line option -W or --where will override this directive
        for the global part, and per table if the table name is the same.
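
        For example, a possible command line sketch (the clause below is
        only illustrative):

                ora2pg -c ora2pg.conf -t COPY -W "ROWNUM < 1000"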

    TOP_MAX
        This directive is used to limit the number of items shown in the
        top N lists, like the top list of tables by number of rows and the
        top list of largest tables in megabytes. By default it is set to 10
        items.

    LOG_ON_ERROR
        Enable this directive if you want to continue direct data import on
        error. When Ora2Pg receives an error on a COPY or INSERT statement
        from PostgreSQL, it will log the statement to a file called
        TABLENAME_error.log in the output directory and continue with the
        next bulk of data. This way you can try to fix the statements and
        manually reload the error log file. Default is disabled: abort the
        import on error.

    REPLACE_QUERY
        Sometimes you may want to extract data from an Oracle table but you
        need a custom query for that: not just a "SELECT * FROM table" like
        Ora2Pg does, but a more complex query. This directive allows you to
        overwrite the query used by Ora2Pg to extract data. The format is
        TABLENAME[SQL_QUERY]. If you have multiple tables to extract by
        replacing the Ora2Pg query, you can define multiple REPLACE_QUERY
        lines.

                REPLACE_QUERY   EMPLOYEES[SELECT e.id,e.firstname,lastname FROM EMPLOYEES e JOIN EMP_UPDT u ON (e.id=u.id AND u.cdate>'2014-08-01 00:00:00')]

  Control of Full Text Search export
    Several directives can be used to control the way Ora2Pg will export
    Oracle Text search indexes. By default CONTEXT indexes will be exported
    to PostgreSQL FTS indexes, but CTXCAT indexes will be exported as
    indexes using the pg_trgm extension.

    CONTEXT_AS_TRGM
        Force Ora2Pg to translate Oracle Text indexes into PostgreSQL
        indexes using the pg_trgm extension. Default is to translate
        CONTEXT indexes into FTS indexes and CTXCAT indexes using pg_trgm.
        Most of the time using pg_trgm is enough; this is what this
        directive is for. You need to create the pg_trgm extension in the
        destination database before importing the objects:

                CREATE EXTENSION pg_trgm;

    FTS_INDEX_ONLY
        By default Ora2Pg creates a function-based index to translate Oracle
        Text indexes.

                CREATE INDEX ON t_document
                        USING gin(to_tsvector('pg_catalog.french', title));

        You will have to rewrite the CONTAINS() clause using to_tsvector(),
        example:

                SELECT id,title FROM t_document
                        WHERE to_tsvector(title) @@ to_tsquery('search_word');

        To force Ora2Pg to create an extra tsvector column with a dedicated
        trigger for FTS indexes, disable this directive. In this case,
        Ora2Pg will add the column as follows:

                ALTER TABLE t_document ADD COLUMN tsv_title tsvector;

        Then, if data have been loaded before, update the column to compute
        the FTS vectors:

                UPDATE t_document SET tsv_title = to_tsvector('pg_catalog.french', coalesce(title,''));

        To automatically update the column when a modification of the title
        column occurs, Ora2Pg adds the following trigger:

                CREATE FUNCTION tsv_t_document_title() RETURNS trigger AS $$
                BEGIN
                       IF TG_OP = 'INSERT' OR new.title != old.title THEN
                               new.tsv_title :=
                               to_tsvector('pg_catalog.french', coalesce(new.title,''));
                       END IF;
                       return new;
                END
                $$ LANGUAGE plpgsql;
                CREATE TRIGGER trig_tsv_t_document_title BEFORE INSERT OR UPDATE
                 ON t_document
                 FOR EACH ROW EXECUTE PROCEDURE tsv_t_document_title();

        When the Oracle Text index is defined over multiple columns, Ora2Pg
        will use setweight() to set a weight following the order of the
        column declaration.

    FTS_CONFIG
        Use this directive to force the text search configuration to use.
        When it is not set, Ora2Pg will autodetect the stemmer used by
        Oracle for each index and fall back to pg_catalog.english if the
        information is not found.
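
        For example, a possible setting would be:

                # Force a French text search configuration for all FTS indexes
                FTS_CONFIG      pg_catalog.french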

    USE_UNACCENT
        If you want to perform your text searches in an accent-insensitive
        way, enable this directive. Ora2Pg will create a helper function
        over unaccent() and create the pg_trgm indexes using this function.
        With FTS, Ora2Pg will redefine your text search configuration, for
        example:

              CREATE TEXT SEARCH CONFIGURATION fr (COPY = french); 
              ALTER TEXT SEARCH CONFIGURATION fr
                      ALTER MAPPING FOR hword, hword_part, word WITH unaccent, french_stem;

        then set the FTS_CONFIG ora2pg.conf directive to fr instead of
        pg_catalog.english.

        When enabled, Ora2pg will create the wrapper function:

              CREATE OR REPLACE FUNCTION unaccent_immutable(text)
              RETURNS text AS
              $$
                  SELECT public.unaccent('public.unaccent', $1);
              $$ LANGUAGE sql IMMUTABLE
                 COST 1;

        the indexes are exported as follows:

              CREATE INDEX t_document_title_unaccent_trgm_idx ON t_document 
                  USING gin (unaccent_immutable(title) gin_trgm_ops);

        In your queries you will need to use the same function in the search
        to be able to use the function-based index. Example:

                SELECT * FROM t_document
                        WHERE unaccent_immutable(title) LIKE '%donnees%';

    USE_LOWER_UNACCENT
        Same as above but calls lower() in the unaccent_immutable()
        function:

              CREATE OR REPLACE FUNCTION unaccent_immutable(text)
              RETURNS text AS
              $$
                  SELECT lower(public.unaccent('public.unaccent', $1));
              $$ LANGUAGE sql IMMUTABLE;

  Modifying object structure
    One of the great strengths of Ora2Pg is its flexibility in replicating
    an Oracle database into a PostgreSQL database with a different
    structure or schema. Several configuration directives allow you to map
    those differences.

    REORDERING_COLUMNS
        Enable this directive to reorder columns and minimize the footprint
        on disk, so that more rows fit on a data page, which is the most
        important factor for speed. Default is disabled: columns keep the
        same order as in the Oracle table definition, which should be
        enough for most usages. This directive is only used with the TABLE
        export.

    MODIFY_STRUCT
        This directive allows you to limit the columns to extract for a
        given table. The value consists of a space-separated list of table
        names with a set of columns between parentheses, as follows:

                MODIFY_STRUCT   NOM_TABLE(nomcol1,nomcol2,...) ...

        for example:

                MODIFY_STRUCT   T_TEST1(id,dossier) T_TEST2(id,fichier)

        This will only extract columns 'id' and 'dossier' from table
        T_TEST1 and columns 'id' and 'fichier' from the T_TEST2 table. This
        directive can only be used with the TABLE, COPY or INSERT export.
        With the TABLE export, the CREATE TABLE DDL will respect the new
        list of columns, and all indexes or foreign keys pointing to or
        from a removed column will not be exported.

    EXCLUDE_COLUMNS
        Instead of redefining the table structure with MODIFY_STRUCT, you
        may want to exclude some columns from the table export. The value
        consists of a space-separated list of table names with a set of
        columns between parentheses, as follows:

                EXCLUDE_COLUMNS NOM_TABLE(nomcol1,nomcol2,...) ...

        for example:

                EXCLUDE_COLUMNS T_TEST1(id,dossier) T_TEST2(id,fichier)

        This will exclude from the export columns 'id' and 'dossier' from
        table T_TEST1 and columns 'id' and 'fichier' from the T_TEST2
        table. This directive can only be used with the TABLE, COPY or
        INSERT export. With the TABLE export, the CREATE TABLE DDL will
        respect the new list of columns, and all indexes or foreign keys
        pointing to or from a removed column will not be exported.

    REPLACE_TABLES
        This directive allows you to remap a list of Oracle table name to a
        PostgreSQL table name during export. The value is a list of
        space-separated values with the following structure:

                REPLACE_TABLES  ORIG_TBNAME1:DEST_TBNAME1 ORIG_TBNAME2:DEST_TBNAME2

        Oracle tables ORIG_TBNAME1 and ORIG_TBNAME2 will be respectively
        renamed to DEST_TBNAME1 and DEST_TBNAME2.

    REPLACE_COLS
        Like table names, column names can be remapped to different names
        using the following syntax:

                REPLACE_COLS    ORIG_TBNAME(ORIG_COLNAME1:NEW_COLNAME1,ORIG_COLNAME2:NEW_COLNAME2)

        For example:

                REPLACE_COLS    T_TEST(dico:dictionary,dossier:folder)

        will rename the Oracle columns 'dico' and 'dossier' of table T_TEST
        to the new names 'dictionary' and 'folder'.

    REPLACE_AS_BOOLEAN
        If you want to change the type of some Oracle columns into
        PostgreSQL boolean during the export, you can define here a list of
        table:column pairs separated by spaces, as follows.

                REPLACE_AS_BOOLEAN     TB_NAME1:COL_NAME1 TB_NAME1:COL_NAME2 TB_NAME2:COL_NAME2

        The values of the columns in this boolean column list will be
        replaced with 't' and 'f' following the default replacement values
        and those additionally set in the BOOLEAN_VALUES directive.

        Note that if you have modified the table name with REPLACE_TABLES
        and/or the column name with REPLACE_COLS, you need to use the name
        of the original table and/or column.

                REPLACE_COLS            TB_NAME1(OLD_COL_NAME1:NEW_COL_NAME1)
                REPLACE_AS_BOOLEAN      TB_NAME1:OLD_COL_NAME1

        You can also give a type and a precision to automatically convert
        all fields of that type to a boolean. For example:

                REPLACE_AS_BOOLEAN      NUMBER:1 CHAR:1 TB_NAME1:COL_NAME1 TB_NAME1:COL_NAME2

        will also replace any field of type number(1) or char(1) with a
        boolean in all exported tables.

    BOOLEAN_VALUES
        Use this to add additional definitions of the possible boolean
        values used in Oracle fields. You must set a space-separated list
        of TRUE:FALSE values. By default here are the values recognized by
        Ora2Pg:

                BOOLEAN_VALUES          yes:no y:n 1:0 true:false enabled:disabled

        Any values defined here will be added to the default list.

    REPLACE_ZERO_DATE
        When Ora2Pg finds a "zero" date, 0000-00-00 00:00:00, it is
        replaced by a NULL. This can be a problem if your column is defined
        with a NOT NULL constraint. If you cannot remove the constraint,
        use this directive to set an arbitrary date that will be used
        instead. You can also use -INFINITY if you don't want to use a fake
        date.
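
        For example, a possible setting (the replacement date below is only
        illustrative):

                # Replace zero dates with an arbitrary date instead of NULL
                REPLACE_ZERO_DATE       1970-01-01 00:00:00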

    INDEXES_SUFFIX
        Add the given value as a suffix to index names. Useful if you have
        indexes with the same name as tables. For example:

                INDEXES_SUFFIX          _idx

        will add _idx at the end of all index names. Not so common, but it
        can help.

    INDEXES_RENAMING
        Enable this directive to rename all indexes using
        tablename_columns_names. Can be very useful for databases that use
        the same index name multiple times or that use the same name as a
        table, which is not allowed by PostgreSQL. Disabled by default.

    USE_INDEX_OPCLASS
        Operator classes text_pattern_ops, varchar_pattern_ops, and
        bpchar_pattern_ops support B-tree indexes on the corresponding
        types. The difference from the default operator classes is that the
        values are compared strictly character by character rather than
        according to the locale-specific collation rules. This makes these
        operator classes suitable for use by queries involving pattern
        matching expressions (LIKE or POSIX regular expressions) when the
        database does not use the standard "C" locale. If you enable it
        with value 1, this will force Ora2Pg to export all indexes defined
        on varchar2() and char() columns using those operators. If you set
        it to a value greater than 1, it will only change indexes on
        columns where the character limit is greater than or equal to this
        value. For example, set it to 128 to create this kind of index on
        columns of type varchar2(N) where N >= 128.
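
        For example, a possible setting would be:

                # Use *_pattern_ops only for varchar2(N)/char(N) columns with N >= 128
                USE_INDEX_OPCLASS       128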

    RENAME_PARTITION
        Enable this directive if you want your partition tables to be
        renamed. Disabled by default. If you have multiple partitioned
        tables, when exported to PostgreSQL some partitions could have the
        same name but different parent tables. This is not allowed; table
        names must be unique, so in this case enable this directive. A
        partition will be renamed following the rule "tablename"_part"pos",
        where "pos" is the partition number. For a subpartition this is
        "tablename"_part"pos"_subpart"pos". If this is a default
        partition/subpartition: "tablename"_part_default or
        "tablename"_part"pos"_subpart_default.

    DISABLE_PARTITION
        If you don't want to reproduce the partitioning like in Oracle and
        want to export all partitioned Oracle data into the single main
        table in PostgreSQL, enable this directive. Ora2Pg will export all
        data into the main table name. Default is to use partitioning:
        Ora2Pg will export data from each partition and import them into
        the dedicated PostgreSQL partition table.

    PARTITION_BY_REFERENCE
        How to export partitions by reference. Possible values are none,
        duplicate, or the number of hash partitions to create. Default is
        none, to not export the partitions by reference.

        Value none means no translation and no export of partitions by
        reference, like before. Value 'duplicate' will duplicate the
        referenced column in the partitioned table and apply the same
        partitioning from the referenced table to the partitioned table. If
        the value is a number, the table will be partitioned with the HASH
        method using the value as the modulo. For example, if you set it to
        4 it will create 4 HASH partitions.
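
        For example, a possible setting would be:

                # Translate partitions by reference into 4 HASH partitions
                PARTITION_BY_REFERENCE  4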

    DISABLE_UNLOGGED
        By default Ora2Pg exports Oracle tables with the NOLOGGING
        attribute as UNLOGGED tables. You may want to fully disable this
        feature because you will lose all data from unlogged tables in case
        of a PostgreSQL crash. Set it to 1 to export all tables as normal
        tables.

    DOUBLE_MAX_VARCHAR
        Increase the varchar maximum character constraint to support
        PostgreSQL two-byte character encodings when the source database
        applies the length constraint on characters, not bytes. Default
        disabled.

  Oracle Spatial to PostGis
    Ora2Pg fully exports spatial objects from the Oracle database. Some
    configuration directives can be used to control the export.

    AUTODETECT_SPATIAL_TYPE
        By default Ora2Pg looks at indexes to find the spatial constraint
        type and dimensions defined under Oracle. Those constraints are
        passed as parameters at index creation, for example:

                CREATE INDEX ... INDEXTYPE IS MDSYS.SPATIAL_INDEX
                PARAMETERS('sdo_indx_dims=2, layer_gtype=point');

        If these Oracle constraint parameters are not set, the default is
        to export those columns as the generic type GEOMETRY, to be able to
        receive any spatial type.

        The AUTODETECT_SPATIAL_TYPE directive allows you to force Ora2Pg to
        autodetect the real spatial type and dimension used in a spatial
        column; otherwise a non-constrained "geometry" type is used.
        Enabling this feature will force Ora2Pg to scan a sample of 50000
        values of the column to look at the GTYPE used. You can increase or
        reduce the sample size by setting the value of
        AUTODETECT_SPATIAL_TYPE to the desired number of lines to scan. The
        directive is enabled by default.

        For example, in the case of a column named shape and defined with
        Oracle type SDO_GEOMETRY, with AUTODETECT_SPATIAL_TYPE disabled it
        will be converted as:

            shape geometry(GEOMETRY) or shape geometry(GEOMETRYZ, 4326)

        and if the directive is enabled and the column just contains a
        single geometry type that uses a single dimension:

            shape geometry(POLYGON, 4326) or shape geometry(POLYGONZ, 4326)

        with a two or three dimensional polygon.

    CONVERT_SRID
        This directive allows you to control the automatic conversion of
        Oracle SRID to standard EPSG. If enabled, Ora2Pg will use the
        Oracle function sdo_cs.map_oracle_srid_to_epsg() to convert all
        SRID. Enabled by default.

        If the SDO_SRID returned by Oracle is NULL, it will be replaced by
        the default value 8307 converted to its EPSG value: 4326 (see
        DEFAULT_SRID).

        If the value is greater than 1, all SRID will be forced to this
        value; in this case DEFAULT_SRID will not be used when Oracle
        returns a null value and the value will be forced to CONVERT_SRID.

        Note that it is also possible to set the EPSG value on the Oracle
        side when sdo_cs.map_oracle_srid_to_epsg() returns NULL, if you
        want to force the value:

          system@db> UPDATE sdo_coord_ref_sys SET legacy_code=41014 WHERE srid = 27572;

    DEFAULT_SRID
        Use this directive to override the default EPSG SRID to be used:
        4326. Can be overwritten by CONVERT_SRID, see above.

    GEOMETRY_EXTRACT_TYPE
        This directive can take three values: WKT, WKB and INTERNAL. When
        it is set to WKT, Ora2Pg will use SDO_UTIL.TO_WKTGEOMETRY() to
        extract the geometry data. When it is set to WKB, Ora2Pg will use
        the binary output of SDO_UTIL.TO_WKBGEOMETRY(). These two
        extraction types are called on the Oracle side; they are slow and
        you can easily reach Out Of Memory when you have lots of rows. Also
        WKB is not able to export 3D geometries and some geometries like
        CURVEPOLYGON. In these cases you may use the INTERNAL extraction
        type. It will use a pure Perl library to convert the SDO_GEOMETRY
        data into a WKT representation; the translation is done on the
        Ora2Pg side. This is a work in progress, please validate your
        exported geometry data before use. The default spatial object
        extraction type is INTERNAL.

    POSTGIS_SCHEMA
        Use this directive to add a specific schema to the search path to
        look for PostGis functions.

    ST_SRID_FUNCTION
        Oracle function to use to extract the srid from ST_Geometry meta
        information. Default: ST_SRID, for example it should be set to
        sde.st_srid for ArcSDE.

    ST_DIMENSION_FUNCTION
        Oracle function to use to extract the dimension from ST_Geometry
        meta information. Default: ST_DIMENSION, for example it should be
        set to sde.st_dimension for ArcSDE.

    ST_GEOMETRYTYPE_FUNCTION
        Oracle function to use to extract the geometry type from an
        ST_Geometry column. Default: ST_GEOMETRYTYPE, for example it should
        be set to sde.st_geometrytype for ArcSDE.

    ST_ASBINARY_FUNCTION
        Oracle function to use to convert an ST_Geometry value into WKB
        format. Default: ST_ASBINARY, for example it should be set to
        sde.st_asbinary for ArcSDE.

    ST_ASTEXT_FUNCTION
        Oracle function to use to convert an ST_Geometry value into WKT
        format. Default: ST_ASTEXT, for example it should be set to
        sde.st_astext for ArcSDE.

  PostgreSQL Import
    By default conversion to PostgreSQL format is written to file
    'output.sql'. The command:

            psql mydb < output.sql

    will import the content of the file output.sql into the PostgreSQL
    database mydb.

    DATA_LIMIT
        When you are performing an INSERT/COPY export, Ora2Pg proceeds in
        chunks of DATA_LIMIT tuples for speed improvement. Tuples are
        stored in memory before being written to disk, so if you want speed
        and have enough system resources you can raise this limit to a
        higher value, for example 100000 or 1000000. Before release 7.0 a
        value of 0 meant no limit, so all tuples were stored in memory
        before being flushed to disk. In the 7.x branch this has been
        removed and the chunk size will be set to the default: 10000.
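
        For example, a possible setting would be:

                # Process data in larger chunks if memory allows
                DATA_LIMIT      100000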

    BLOB_LIMIT
        When Ora2Pg detects a table with BLOB columns, it will
        automatically reduce the value of this directive by dividing it by
        10 until its value is below 1000. You can control this value by
        setting BLOB_LIMIT. Exporting BLOBs uses a lot of resources;
        setting it to too high a value can produce OOM.

    CLOB_AS_BLOB
        Apply the same behavior to CLOB as to BLOB with the BLOB_LIMIT
        setting. This is especially useful if you have large CLOB data.
        Default: enabled.

    OUTPUT
        The Ora2Pg output filename can be changed with this directive.
        Default value is output.sql. If you set the file name with the
        extension .gz or .bz2 the output will be automatically compressed.
        This requires that the Compress::Zlib Perl module is installed if
        the filename extension is .gz, and that the bzip2 system command is
        installed for the .bz2 extension.

    OUTPUT_DIR
        Since release 7.0, you can define a base directory where the file
        will be written. The directory must exist.

    BZIP2
        This directive allows you to specify the full path to the bzip2
        program if it can not be found in the PATH environment variable.

    FILE_PER_CONSTRAINT
        Allow object constraints to be saved in a separate file during
        schema export. The file will be named CONSTRAINTS_OUTPUT, where
        OUTPUT is the value of the corresponding configuration directive.
        You can use the .gz or .bz2 extension to enable compression.
        Default is to save all data in the OUTPUT file. This directive is
        usable only with the TABLE export type.

        The constraints can be imported quickly into PostgreSQL using the
        LOAD export type to parallelize their creation over multiple (-j or
        JOBS) connections.

    FILE_PER_INDEX
        Allow indexes to be saved in a separate file during schema export.
        The file will be named INDEXES_OUTPUT, where OUTPUT is the value of
        the corresponding configuration directive. You can use the .gz or
        .bz2 file extension to enable compression. Default is to save all
        data in the OUTPUT file. This directive is usable only with the
        TABLE and TABLESPACE export types. With the TABLESPACE export, it
        is used to
        write "ALTER INDEX ... TABLESPACE ..." into a separate file named
        TBSP_INDEXES_OUTPUT that can be loaded at end of the migration after
        the indexes creation to move the indexes.

        The indexes can be imported quickly into PostgreSQL using the LOAD
        export type to parallelize their creation over multiple (-j or JOBS)
        connections.

    FILE_PER_FKEYS
        Allow foreign key declarations to be saved in a separate file
        during schema export. By default foreign keys are exported into the
        main output file or into the CONSTRAINT_output.sql file. When
        enabled, foreign keys will be exported into a file named
        FKEYS_output.sql.

    FILE_PER_TABLE
        Allow data export to be saved in one file per table/view. The files
        will be named as tablename_OUTPUT, where OUTPUT is the value of the
        corresponding configuration directive. You can still use the .gz or
        .bz2 extension in the OUTPUT directive to enable compression.
        Default 0 will save all data in one file; set it to 1 to enable
        this feature. This is usable only during the INSERT or COPY export
        type.

    FILE_PER_FUNCTION
        Allow functions, procedures and triggers to be saved in one file per
        object. The files will be named as objectname_OUTPUT, where OUTPUT
        is the value of the corresponding configuration directive. You can
        still use the .gz or .bz2 extension in the OUTPUT directive to
        enable compression. Default 0 will save everything in one single
        file; set it to 1 to enable this feature. This is usable only
        during the corresponding export type; the package body export has a
        special behavior.

        When export type is PACKAGE and you've enabled this directive,
        Ora2Pg will create a directory per package, named with the lower
        case name of the package, and will create one file per
        function/procedure into that directory. If the configuration
        directive is not enabled, it will create one file per package as
        packagename_OUTPUT, where OUTPUT is the value of the corresponding
        directive.

    TRUNCATE_TABLE
        If this directive is set to 1, a TRUNCATE TABLE instruction will be
        added before loading data. This is usable only during the INSERT or
        COPY export type.

        When activated, the instruction will be added only if there is no
        global DELETE clause and none specific to the current table (see
        below).

    DELETE
        Support for including a DELETE FROM ... WHERE clause filter before
        importing data, to delete some rows instead of truncating tables.
        The value is constructed as follows:
        TABLE_NAME[DELETE_WHERE_CLAUSE], or if you have only one where
        clause for all tables just put the delete clause as a single value.
        Both forms can also be combined. Here are some examples:

                DELETE  1=1    # Apply to all tables and delete all tuples
                DELETE  TABLE_TEST[ID1='001']   # Apply only on table TABLE_TEST
                DELETE  TABLE_TEST[ID1='001' OR ID1='002'] DATE_CREATE > '2001-01-01' TABLE_INFO[NAME='test']

        The last example applies two different delete where clauses on
        tables TABLE_TEST and TABLE_INFO and a generic delete where clause
        on DATE_CREATE to all other tables. If TRUNCATE_TABLE is enabled,
        it will be applied to all tables not covered by the DELETE
        definition.

        These DELETE clauses might be useful with regular "updates".

    STOP_ON_ERROR
        Set this parameter to 0 to not include the call to \set
        ON_ERROR_STOP ON in all SQL scripts generated by Ora2Pg. By default
        this command is always present so that the script will immediately
        abort when an error is encountered.

    COPY_FREEZE
        Enable this directive to use COPY FREEZE instead of a simple COPY to
        export data with rows already frozen. This is intended as a
        performance option for initial data loading. Rows will be frozen
        only if the table being loaded has been created or truncated in the
        current sub-transaction. This will only work with export to file
        and when -J or ORACLE_COPIES is not set or defaults to 1. It can be
        used with direct import into PostgreSQL under the same condition,
        but -j or JOBS must also be unset or default to 1.

    CREATE_OR_REPLACE
        By default Ora2Pg uses CREATE OR REPLACE in function and view DDL.
        If you do not want to override existing functions or views, disable
        this configuration directive and the DDL will not include OR
        REPLACE.

    DROP_IF_EXISTS
        To add a DROP <OBJECT> IF EXISTS before creating the object, enable
        this directive. Can be useful in an iterative work. Default is
        disabled.

    EXPORT_GTT
        PostgreSQL does not support Global Temporary Tables natively, but
        you can use the pgtt extension to emulate this behavior. Enable
        this directive to export global temporary tables.

    PGTT_NOSUPERUSER
        By default the pgtt extension is loaded using the superuser
        privilege. Enable this directive if you run the generated SQL
        scripts as a non-superuser. It will use:

            LOAD '$libdir/plugins/pgtt';

        instead of default:

            LOAD 'pgtt';

    NO_HEADER
        Enabling this directive will prevent Ora2Pg from printing its
        header into output files. Only the translated code will be written.

    PSQL_RELATIVE_PATH
        By default Ora2Pg uses the psql command \i to execute generated SQL
        files. If you want to use a path relative to the executed script,
        enable this option to use \ir instead. See the psql help for more
        information.

    DATA_VALIDATION_ROWS
        Number of rows that must be retrieved on both sides for data
        validation. Default is to compare the first 10000 rows. A value of
        0 means compare all rows.

    DATA_VALIDATION_ORDERING
        The order of rows on both sides differs once the data have been
        modified. In this case data must be ordered using a primary key or
        a unique index, which means that a table without such an object
        cannot be compared. If the validation is done just after the data
        migration, without any data modification, the validation can be
        done on all tables without any ordering.

    DATA_VALIDATION_ERROR
        Stop validating data from a table after a certain number of row
        mismatches. Default is to stop after 10 row validation errors.

    TRANSFORM_VALUE
        Use this directive to specify which transformation should be
        applied to a column when exporting data. The value must be a
        semicolon-separated list of

           TABLE[COLUMN_NAME, <replace code in SELECT target list>]

        For example, to replace the string 'Oracle' with 'PostgreSQL' in a
        varchar2 column, use the following:

           TRANSFORM_VALUE   ERROR_LOG_SAMPLE[DBMS_TYPE:regexp_replace("DBMS_TYPE",'Oracle','PostgreSQL')]

        or to replace all Oracle char(0) in a string by a space character:

            TRANSFORM_VALUE   CLOB_TABLE[CHARDATA:translate("CHARDATA", chr(0), ' ')]

        The expression will be applied in the SQL statement used to extract
        data from the source database.

    When using the Ora2Pg export type INSERT or COPY to dump data to files
    and FILE_PER_TABLE is enabled, you will be warned that Ora2Pg will not
    export data again if the file already exists. This is to prevent
    downloading tables with huge amounts of data twice. To force the
    download of data from these tables you have to remove the existing
    output file first.

    If you want to import data on the fly to the PostgreSQL database, you
    have three configuration directives to set the PostgreSQL database
    connection. This is only possible with the COPY or INSERT export types,
    as for the database schema there is no real benefit in doing so.

    PG_DSN
        Use this directive to set the PostgreSQL data source name using the
        DBD::Pg Perl module, as follows:

                dbi:Pg:dbname=pgdb;host=localhost;port=5432

        will connect to database 'pgdb' on localhost at tcp port 5432.

        Note that this directive is only used for data export; other
        exports need to be imported manually through the use of psql or any
        other PostgreSQL client.

        To use an SSL encrypted connection you must add sslmode=require to
        the connection string, as follows:

                dbi:Pg:dbname=pgdb;host=localhost;port=5432;sslmode=require

    PG_USER and PG_PWD
        These two directives are used to set the login user and password.

        If you do not supply a credential with PG_PWD and you have installed
        the Term::ReadKey Perl module, Ora2Pg will ask for the password
        interactively. If PG_USER is not set it will be asked interactively
        too.

    SYNCHRONOUS_COMMIT
        Specifies whether transaction commit will wait for WAL records to be
        written to disk before the command returns a "success" indication to
        the client. This is the equivalent of setting the
        synchronous_commit directive in the postgresql.conf file. This is
        only used when you load data directly into PostgreSQL; the default
        is off, to disable synchronous commit and gain speed at writing
        data. Some modified versions of PostgreSQL, like Greenplum, do not
        have this setting, so in this case set this directive to 1 and
        ora2pg will not try to change the setting.

    PG_INITIAL_COMMAND
        This directive can be used to send an initial command to PostgreSQL,
        just after the connection. For example to set some session
        parameters. This directive can be used multiple times.
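
        For example, a possible setting (the session parameter below is
        only illustrative):

                # Run once right after connecting to PostgreSQL
                PG_INITIAL_COMMAND      SET search_path TO migration,public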

    INSERT_ON_CONFLICT
        When enabled, this instructs Ora2Pg to add an ON CONFLICT DO
        NOTHING clause to all INSERT statements generated for this type of
        data export.
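
        For example, a generated statement would take the following form
        (the table and values below are only illustrative):

                INSERT INTO employees (id, name) VALUES (1, 'Smith') ON CONFLICT DO NOTHING;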

  Column type control
    PG_NUMERIC_TYPE
        If set to 1, replace the portable numeric type with PostgreSQL
        internal types. The Oracle data type NUMBER(p,s) is approximately
        converted to the real and float PostgreSQL data types. If you have
        monetary fields or don't want rounding issues with the extra
        decimals you should preserve the same numeric(p,s) PostgreSQL data
        type. Do that only if you need exactness, because using
        numeric(p,s) is slower than using real or double.

    PG_INTEGER_TYPE
        If set to 1, replace the portable numeric type with PostgreSQL
        internal types. The Oracle data types NUMBER(p) and NUMBER are
        converted to the smallint, integer or bigint PostgreSQL data type
        depending on the precision. NUMBER without precision is converted
        according to DEFAULT_NUMERIC (see below).

    DEFAULT_NUMERIC
        NUMBER without precision is converted by default to bigint only if
        PG_INTEGER_TYPE is true. You can overwrite this value with any PG
        type, like integer or float.
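
        For example, a possible setting would be:

                # Convert NUMBER without precision to integer instead of bigint
                DEFAULT_NUMERIC integer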

    DATA_TYPE
        If you're experiencing any problem in data type schema conversion,
        this directive lets you take full control of the correspondence
        between Oracle and PostgreSQL types to redefine the data type
        translation used in Ora2pg. The syntax is a comma-separated list of
        "Oracle datatype:Postgresql datatype". Here is the default list
        used:

                DATA_TYPE       VARCHAR2:varchar,NVARCHAR2:varchar,NVARCHAR:varchar,NCHAR:char,DATE:timestamp(0),LONG:text,LONG RAW:bytea,CLOB:text,NCLOB:text,BLOB:bytea,BFILE:bytea,RAW(16):uuid,RAW(32):uuid,RAW:bytea,UROWID:oid,ROWID:oid,FLOAT:double precision,DEC:decimal,DECIMAL:decimal,DOUBLE PRECISION:double precision,INT:integer,INTEGER:integer,REAL:real,SMALLINT:smallint,BINARY_FLOAT:double precision,BINARY_DOUBLE:double precision,TIMESTAMP:timestamp,XMLTYPE:xml,BINARY_INTEGER:integer,PLS_INTEGER:integer,TIMESTAMP WITH TIME ZONE:timestamp with time zone,TIMESTAMP WITH LOCAL TIME ZONE:timestamp with time zone

        The directive and the list definition must be a single line.

        Note that when a RAW(16) or RAW(32) column is found, or when a RAW
        column has "SYS_GUID()" as default value, Ora2Pg will automatically
        translate the type of the column into uuid, which might be the
        right translation in most cases. In this case data will be
        automatically migrated as the PostgreSQL uuid data type provided by
        the "uuid-ossp" extension.

        If you want to replace a type with a precision and scale you need
        to escape the comma with a backslash. For example, if you want to
        replace all NUMBER(*,0) with bigint instead of numeric(38), add the
        following:

               DATA_TYPE       NUMBER(*\,0):bigint

        You don't have to recopy all the default type conversions, just the
        ones you want to rewrite.

        There's a special case with BFILE: when converted to type TEXT,
        they will just contain the full path to the external file. If you
        set the destination type to BYTEA, the default, Ora2Pg will export
        the content of the BFILE as bytea. The third case is when you set
        the destination type to EFILE; in this case, Ora2Pg will export it
        as an EFILE record: (DIRECTORY, FILENAME). Use the DIRECTORY export
        type to export the existing directories as well as privileges on
        those directories.

        There's no SQL function available to retrieve the path to the
        BFILE. Ora2Pg has to create one using the DBMS_LOB package.

                CREATE OR REPLACE FUNCTION ora2pg_get_bfilename( p_bfile IN BFILE )
                RETURN VARCHAR2
                AS
                    l_dir   VARCHAR2(4000);
                    l_fname VARCHAR2(4000);
                    l_path  VARCHAR2(4000);
                BEGIN
                    dbms_lob.FILEGETNAME( p_bfile, l_dir, l_fname );
                    SELECT directory_path INTO l_path FROM all_directories
                        WHERE directory_name = l_dir;
                    l_dir := rtrim(l_path,'/');
                    RETURN l_dir || '/' || l_fname;
                END;

        This function is only created if Ora2Pg finds a table with a BFILE
        column and the destination type is TEXT. The function is dropped at
        the end of the export. This concerns both the COPY and INSERT
        export types.

        There's no SQL function available to retrieve a BFILE as an EFILE
        record, so Ora2Pg has to create one using the DBMS_LOB package.

                CREATE OR REPLACE FUNCTION ora2pg_get_efile( p_bfile IN BFILE )
                RETURN VARCHAR2
                AS
                    l_dir   VARCHAR2(4000);
                    l_fname VARCHAR2(4000);
                BEGIN
                    dbms_lob.FILEGETNAME( p_bfile, l_dir, l_fname );
                    RETURN '(' || l_dir || ',' || l_fname || ')';
                END;

        This function is only created if Ora2Pg finds a table with a BFILE
        column and the destination type is EFILE. The function is dropped
        at the end of the export. This concerns both the COPY and INSERT
        export types.

        To set the destination type, use the DATA_TYPE configuration
        directive:

                DATA_TYPE       BFILE:EFILE

        for example.

        The EFILE type is a user defined type created by the PostgreSQL
        extension external_file that can be found here:
        https://github.com/darold/external_file This is a port of the BFILE
        Oracle type to PostgreSQL.

        There's no SQL function available to retrieve the content of a
        BFILE. Ora2Pg has to create one using the DBMS_LOB package.

                CREATE OR REPLACE FUNCTION ora2pg_get_bfile( p_bfile IN BFILE ) RETURN
                BLOB
                  AS
                        filecontent BLOB := NULL;
                        src_file BFILE := NULL;
                        l_step PLS_INTEGER := 12000;
                        l_dir   VARCHAR2(4000);
                        l_fname VARCHAR2(4000);
                        offset NUMBER := 1;
                  BEGIN
                    IF p_bfile IS NULL THEN
                      RETURN NULL;
                    END IF;

                    DBMS_LOB.FILEGETNAME( p_bfile, l_dir, l_fname );
                    src_file := BFILENAME( l_dir, l_fname );
                    IF src_file IS NULL THEN
                        RETURN NULL;
                    END IF;

                    DBMS_LOB.FILEOPEN(src_file, DBMS_LOB.FILE_READONLY);
                    DBMS_LOB.CREATETEMPORARY(filecontent, true);
                    DBMS_LOB.LOADBLOBFROMFILE (filecontent, src_file, DBMS_LOB.LOBMAXSIZE, offset, offset);
                    DBMS_LOB.FILECLOSE(src_file);
                    RETURN filecontent;
                END;

        This function is only created if Ora2Pg finds a table with a BFILE
        column and the destination type is bytea (the default). The
        function is dropped at the end of the export. This concerns both
        the COPY and INSERT export types.

        As for ROWID and UROWID, they are converted into OID by "logical"
        default, but this will throw an error at data import. There is no
        equivalent data type, so you might want to use the DATA_TYPE
        directive to change the corresponding type in PostgreSQL. You
        should consider replacing this data type with a bigserial
        (autoincremented sequence), text or uuid data type.

    MODIFY_TYPE
        Sometimes you need to force the destination type, for example a
        column exported as timestamp by Ora2Pg can be forced into type date.
        Value is a comma-separated list of TABLE:COLUMN:TYPE structures. If
        you need to use a comma or space inside a type definition you will
        have to escape them with a backslash.

                MODIFY_TYPE     TABLE1:COL3:varchar,TABLE1:COL4:decimal(9\,6)

        Type of table1.col3 will be replaced by a varchar and table1.col4 by
        a decimal with precision and scale.

        If the column's type is a user defined type, Ora2Pg will autodetect
        the composite type and will export its data using ROW(). Some
        Oracle user defined types are just arrays of a native type; in this
        case you may want to transform this column into a simple array of a
        PostgreSQL native type. To do so, just redefine the destination
        type as wanted and Ora2Pg will also transform the data as an array.
        For example, with the following definition in Oracle:

                CREATE OR REPLACE TYPE mem_type IS VARRAY(10) of VARCHAR2(15);
                CREATE TABLE club (Name VARCHAR2(10),
                        Address VARCHAR2(20),
                        City VARCHAR2(20),
                        Phone VARCHAR2(8),
                        Members mem_type
                );

        custom type "mem_type" is just a string array and can be translated
        into the following in PostgreSQL:

                CREATE TABLE club (
                        name varchar(10),
                        address varchar(20),
                        city varchar(20),
                        phone varchar(8),
                        members text[]
                ) ;

        To do so, just use the directive as follows:

                MODIFY_TYPE     CLUB:MEMBERS:text[]

        Ora2Pg will take care of transforming all data of this column into
        the correct format. Only arrays of character and numeric types are
        supported.

    TO_NUMBER_CONVERSION
        By default an Oracle call to the TO_NUMBER function will be
        translated as a cast into numeric. For example, TO_NUMBER('10.1234')
        is converted into the PostgreSQL call
        to_number('10.1234')::numeric. If you want, you can cast the call
        to integer or bigint by changing the value of this configuration
        directive. If you need better control of the format, just set a
        format mask as value, for example:

                TO_NUMBER_CONVERSION    99999999999999999999.9999999999

        will convert the code above into:

                TO_NUMBER('10.1234', '99999999999999999999.9999999999')

        Any value of the directive that is not numeric, integer or bigint
        will be taken as a mask format. If set to none, no conversion will
        be done.

    VARCHAR_TO_TEXT
        By default varchar2 without a size constraint is translated into
        text. If you want to keep the varchar name, disable this directive.

    FORCE_IDENTITY_BIGINT
        Usually identity columns must be bigint to correspond to an
        auto-increment sequence, so Ora2Pg always forces them to bigint.
        If, for any reason, you want Ora2Pg to respect the DATA_TYPE you
        have set for identity columns, disable this directive.

    TO_CHAR_NOTIMEZONE
        If you want Ora2Pg to remove any timezone information from the
        format part of the TO_CHAR() function, enable this directive.
        Disabled by default.

  Taking export under control
    The following other configuration directives interact directly with the
    export process and give you fine granularity in database export control.

    SKIP
        For TABLE export you may not want to export all schema constraints,
        the SKIP configuration directive allows you to specify a
        space-separated list of constraints that should not be exported.
        Possible values are:

                - fkeys: turn off foreign key constraints
                - pkeys: turn off primary keys
                - ukeys: turn off unique column constraints
                - indexes: turn off all other index types
                - checks: turn off check constraints

        For example:

                SKIP    indexes,checks

        will remove indexes and check constraints from the export.

    PKEY_IN_CREATE
        Enable this directive if you want to add the primary key definition
        inside the CREATE TABLE statement. If disabled (the default) the
        primary key definition will be added with an ALTER TABLE statement.
        Enable it if you are exporting to a Greenplum PostgreSQL database.

    KEEP_PKEY_NAMES
        By default names of the primary and unique key in the source Oracle
        database are ignored and key names are autogenerated in the target
        PostgreSQL database with the PostgreSQL internal default naming
        rules. If you want to preserve Oracle primary and unique key names
        set this option to 1.

    FKEY_ADD_UPDATE
        This directive allows you to add an ON UPDATE CASCADE option to a
        foreign key when an ON DELETE CASCADE is defined, or always. Oracle
        does not support this feature; you have to use a trigger to emulate
        ON UPDATE CASCADE. As PostgreSQL has this feature, you can choose
        how to add the foreign key option. There are three values for this
        directive: never, the default, means that foreign keys will be
        declared exactly like in Oracle. The second value, delete, means
        that the ON UPDATE CASCADE option will be added only if ON DELETE
        CASCADE is already defined on the foreign key. The last value,
        always, will force all foreign keys to be defined using the update
        option.
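
        For example, a possible setting would be:

                # Add ON UPDATE CASCADE only where ON DELETE CASCADE already exists
                FKEY_ADD_UPDATE delete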

    FKEY_DEFERRABLE
        When exporting tables, Ora2Pg normally exports constraints as they
        are, if they are non-deferrable they are exported as non-deferrable.
        However, non-deferrable constraints will probably cause problems
        when attempting to import data to Pg. The FKEY_DEFERRABLE option set
        to 1 will cause all foreign key constraints to be exported as
        deferrable.

    DEFER_FKEY
        In addition, when exporting data with the DEFER_FKEY option set to
        1, it will add a command to defer all foreign key constraints
        during data export and the import will be done in a single
        transaction.
        This will work only if foreign keys have been exported as deferrable
        and you are not using direct import to PostgreSQL (PG_DSN is not
        defined). Constraints will then be checked at the end of the
        transaction.

        This directive can also be enabled if you want to force all foreign
        keys to be created as deferrable and initially deferred during
        schema export (TABLE export type).

    DROP_FKEY
        If deferring foreign keys is not possible due to the amount of data
        in a single transaction, if you have not exported foreign keys as
        deferrable or if you are using direct import to PostgreSQL, you can
        use the DROP_FKEY directive.

        It will drop all foreign keys before the data import and recreate
        them at the end of the import.

    DROP_INDEXES
        This directive allows you to gain a lot of speed during data import
        by removing all indexes that are not automatic indexes (indexes of
        primary keys) and recreating them at the end of the data import. Of
        course it is far better not to import indexes and constraints
        before having imported all data.

    DISABLE_TRIGGERS
        This directive is used to disable triggers on all tables in COPY or
        INSERT export modes. Available values are USER (disable user-defined
        triggers only) and ALL (includes RI system triggers). Default is 0:
        do not add SQL statements to disable trigger before data import.

        If you want to disable triggers during data migration, set the
        value to USER if you are connected as a non-superuser and ALL if
        you are connected as a PostgreSQL superuser. A value of 1 is equal
        to USER.
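
        For example, a possible setting would be:

                # Disable only user-defined triggers during COPY/INSERT import
                DISABLE_TRIGGERS        USER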

    DISABLE_SEQUENCE
        If set to 1, disables the altering of sequences on all tables
        during the COPY or INSERT export mode. This is used to prevent the
        update of sequences during data migration. Default is 0, alter
        sequences.

    NOESCAPE
        By default all data that are not of type date or time are escaped.
        If you experience any problem with that you can set it to 1 to
        disable character escaping during data export. This directive is
        only used during a COPY export. See STANDARD_CONFORMING_STRINGS for
        enabling/disabling escape with INSERT statements.

    STANDARD_CONFORMING_STRINGS
        This controls whether ordinary string literals ('...') treat
        backslashes literally, as specified in the SQL standard. This was
        the default before Ora2Pg v8.5, so all strings were escaped first;
        now this is on by default, causing Ora2Pg to use the escape string
        syntax (E'...') if this parameter is not set to 0. This is the
        exact behavior of the same option in PostgreSQL. This directive is
        only used during data export to build INSERT statements. See
        NOESCAPE for enabling/disabling escaping in COPY statements.

    TRIM_TYPE
        If you want to convert CHAR(n) from Oracle into varchar(n) or text
        on PostgreSQL using directive DATA_TYPE, you might want to do some
        trimming on the data. By default Ora2Pg will auto-detect this
        conversion and remove any whitespace at both leading and trailing
        positions. If you just want to remove the leading characters, set
        the value to LEADING. If you just want to remove the trailing
        characters, set the value to TRAILING. Default value is BOTH.

    TRIM_CHAR
        The default trimming character is a space; use this directive if
        you need to change the character that will be removed. For example,
        set it to - if you have leading - in the char(n) field. To use space
        as the trimming character, comment out this directive, as it is the
        default value.
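
        For example, a hedged ora2pg.conf sketch to trim trailing '-'
        characters from CHAR(n) columns converted to varchar or text:

                TRIM_TYPE      TRAILING
                TRIM_CHAR      -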

    PRESERVE_CASE
        If you want to preserve the case of Oracle object name set this
        directive to 1. By default Ora2Pg will convert all Oracle object
        names to lower case. Enabling this is not recommended because you
        will then always have to double-quote object names in all your SQL
        scripts.

    ORA_RESERVED_WORDS
        Allow escaping of column names that are Oracle reserved words.
        Value is a comma-separated list of reserved words. Default:
        audit,comment,references.

    USE_RESERVED_WORDS
        Enable this directive if you have table or column names that are a
        reserved word for PostgreSQL. Ora2Pg will double quote the name of
        the object.

    GEN_USER_PWD
        Set this directive to 1 to replace the default password with a
        random password for all extracted users during a GRANT export.

    PG_SUPPORTS_MVIEW
        Since PostgreSQL 9.3, materialized views are supported with the SQL
        syntax 'CREATE MATERIALIZED VIEW'. To force Ora2Pg to use the native
        PostgreSQL support you must enable this configuration directive - it
        is enabled by default. If you want to use the old style with a table
        and a set of functions, you should disable it.

    PG_SUPPORTS_IFEXISTS
        PostgreSQL versions below 9.x do not support IF EXISTS in DDL
        statements. Disabling the directive with value 0 will prevent Ora2Pg
        from adding those keywords to the generated statements. Default
        value is 1, enabled.

    PG_VERSION
        Set the PostgreSQL major version number of the target database,
        e.g. 9.6 or 13. Default is the current major version at the time of
        a new release. This replaces the old and deprecated PG_SUPPORTS_*
        configuration directives described below.
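
        For example, to target a PostgreSQL 13 database you could set:

                PG_VERSION     13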

    PG_SUPPORTS_ROLE (Deprecated)
        This option is deprecated since Ora2Pg release v7.3.

        By default Oracle roles are translated into PostgreSQL groups. If
        you have PostgreSQL 8.1 or more consider the use of ROLES and set
        this directive to 1 to export roles.

    PG_SUPPORTS_INOUT (Deprecated)
        This option is deprecated since Ora2Pg release v7.3.

        If set to 0, IN, OUT or INOUT parameters will not be used in the
        generated PostgreSQL function declarations (disable it for
        PostgreSQL database versions lower than 8.1). This is now enabled by
        default.

    PG_SUPPORTS_DEFAULT
        This directive enables or disables the use of default parameter
        values in function export. Before PostgreSQL 8.4 such default values
        were not supported; this feature is now enabled by default.

    PG_SUPPORTS_WHEN (Deprecated)
        Add support for the WHEN clause on triggers, supported by
        PostgreSQL since v9.0. This directive is enabled by default, set it
        to 0 to disable this feature.

    PG_SUPPORTS_INSTEADOF (Deprecated)
        Add support for INSTEAD OF triggers (used with PG >= 9.1); if this
        directive is disabled the INSTEAD OF triggers will be rewritten as
        Pg rules.

    PG_SUPPORTS_CHECKOPTION
        When enabled, export views with CHECK OPTION. Disable it if you have
        PostgreSQL version prior to 9.4. Default: 1, enabled.

    PG_SUPPORTS_IFEXISTS
        If disabled, do not export object with IF EXISTS statements. Enabled
        by default.

    PG_SUPPORTS_PARTITION
        PostgreSQL versions prior to 10.0 do not have native partitioning.
        Enable this directive if you want to use declarative partitioning.
        Enabled by default.

    PG_SUPPORTS_SUBSTR
        Some PostgreSQL derivatives like Redshift do not support substr()
        and it needs to be replaced by a call to substring(). In this case,
        disable this directive.

    PG_SUPPORTS_NAMED_OPERATOR
        Disable this directive if you are using PG < 9.5; the PL/SQL named
        parameter operator => will then be replaced by the PostgreSQL
        proprietary operator :=. Enabled by default.

    PG_SUPPORTS_IDENTITY
        Enable this directive if you have PostgreSQL >= 10 to use IDENTITY
        columns instead of the serial or bigserial data types. If
        PG_SUPPORTS_IDENTITY is disabled and there are IDENTITY columns in
        the Oracle table, they are exported as serial or bigserial columns.
        When it is enabled they are exported as IDENTITY columns like:

              CREATE TABLE identity_test_tab (
                      id bigint GENERATED ALWAYS AS IDENTITY,
                      description varchar(30)
              ) ;

        If non-default sequence options are set in Oracle, they will be
        appended after the IDENTITY keyword. Additionally, in both cases
        Ora2Pg will create a file AUTOINCREMENT_output.sql with an embedded
        function to update the associated sequences with the restart value
        set to "SELECT max(colname)+1 FROM tablename". Of course this file
        must be imported after the data import, otherwise the sequences will
        keep their start value. Enabled by default.

    PG_SUPPORTS_PROCEDURE
        PostgreSQL v11 adds support for PROCEDUREs, enable this directive
        if you use such a version.

    BITMAP_AS_GIN
        Use the btree_gin extension to create bitmap-like indexes with pg
        >= 9.4. You will need to create the extension yourself: create
        extension btree_gin; The default is to create a GIN index; when
        disabled, a btree index will be created.
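
        For example, the extension can be created once on the target
        database before importing the indexes:

                CREATE EXTENSION btree_gin;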

    PG_BACKGROUND
        Use pg_background extension to create an autonomous transaction
        instead of using a dblink wrapper. With pg >= 9.5 only. Default is
        to use dblink. See https://github.com/vibhorkum/pg_background about
        this extension.

    DBLINK_CONN
        By default if you have an autonomous transaction translated using
        dblink extension instead of pg_background the connection is defined
        using the values set with PG_DSN, PG_USER and PG_PWD. If you want to
        fully override the connection string use this directive as follow to
        set the connection in the autonomous transaction wrapper function.
        For example:

                DBLINK_CONN    port=5432 dbname=pgdb host=localhost user=pguser password=pgpass

    LONGREADLEN
        Use this directive to set the database handle's 'LongReadLen'
        attribute to a value that will be larger than the expected size of
        the LOBs. The default is 1MB which may not be enough to extract
        BLOBs or CLOBs. If the size of the LOB exceeds 'LongReadLen',
        DBD::Oracle will return an 'ORA-24345: A Truncation' error. Default:
        1023*1024 bytes.

        Take a look at this page to learn more:
        http://search.cpan.org/~pythian/DBD-Oracle-1.22/Oracle.pm#Data_Inter
        face_for_Persistent_LOBs

        Important note: if you increase the value of this directive, take
        care that DATA_LIMIT will probably need to be reduced. Even if you
        only have a 1MB blob, trying to read 10000 of them (the default
        DATA_LIMIT) at once will require 10GB of memory. You may extract
        data from those tables separately and set DATA_LIMIT to 500 or
        lower, otherwise you may experience out of memory errors.
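
        As an illustration only (the right values depend on your data), a
        table holding LOBs of up to 10MB could be exported with settings
        like:

                LONGREADLEN    10485760
                DATA_LIMIT     500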

    LONGTRUNKOK
        If you want to bypass the 'ORA-24345: A Truncation' error, set this
        directive to 1, it will truncate the extracted data at the
        LongReadLen value. Disabled by default so that you will be warned if
        your LongReadLen value is not high enough.

    USE_LOB_LOCATOR
        Disable this if you want to load full content of BLOB and CLOB and
        not use LOB locators. In this case you will have to set LONGREADLEN
        to the right value. Note that this will not improve speed of BLOB
        export as most of the time is always consumed by the bytea escaping
        and in this case export is done line by line and not by chunk of
        DATA_LIMIT rows. For more information on how it works, see
        http://search.cpan.org/~pythian/DBD-Oracle-1.74/lib/DBD/Oracle.pm#Da
        ta_Interface_for_LOB_Locators

        Default is enabled, it uses LOB locators.

    LOB_CHUNK_SIZE
        Oracle recommends reading from and writing to a LOB in batches using
        a multiple of the LOB chunk size. This chunk size defaults to 8k
        (8192). Recent tests have shown that the best performance can be
        reached with higher values like 512K or 4MB.

        A quick benchmark with 30120 rows with different size of BLOB
        (200x5Mb, 19800x212k, 10000x942K, 100x17Mb, 20x156Mb), with
        DATA_LIMIT=100, LONGREADLEN=170Mb and a total table size of 20GB
        gives:

               no lob locator  : 22m46,218s (1365 sec., avg: 22 recs/sec)
               chunk size 8k   : 15m50,886s (951 sec., avg: 31 recs/sec)
               chunk size 512k : 1m28,161s (88 sec., avg: 342 recs/sec)
               chunk size 4Mb  : 1m23,717s (83 sec., avg: 362 recs/sec)

        In conclusion it can be more than 10 times faster with
        LOB_CHUNK_SIZE set to 4MB. Depending on the size of most of your
        BLOBs you may want to adjust the value here. For example if you have
        a majority of small LOBs below 8K, using 8192 is better to not waste
        space. Default value for LOB_CHUNK_SIZE is 512000.
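
        For example, to use the 4MB chunk size from the benchmark above,
        assuming the value is expressed in bytes like the default:

                LOB_CHUNK_SIZE    4194304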

    XML_PRETTY
        Force the use of getStringVal() instead of getClobVal() for XML
        data export. Default is 1, enabled for backward compatibility. Set
        it to 0 to extract the XML like a CLOB (getClobVal()). Note that XML
        values extracted with getStringVal() must not exceed the VARCHAR2
        size limit (4000) otherwise it will return an error.

    ENABLE_MICROSECOND
        Set it to 0 if you want to disable the export of milliseconds from
        Oracle timestamp columns. By default milliseconds are exported with
        the use of the following format:

                'YYYY-MM-DD HH24:MI:SS.FF'

        Disabling will force the use of the following Oracle format:

                to_char(..., 'YYYY-MM-DD HH24:MI:SS')

        By default milliseconds are exported.

    DISABLE_COMMENT
        Set this to 1 if you don't want to export the comments associated
        with table and column definitions. By default comments are exported.

  Control MySQL export behavior
    MYSQL_PIPES_AS_CONCAT
        Enable this if double pipe and double ampersand (|| and &&) should
        not be taken as equivalent to OR and AND. It depends on the
        @sql_mode variable; use it only if Ora2Pg fails to auto-detect this
        behavior.

    MYSQL_INTERNAL_EXTRACT_FORMAT
        Enable this directive if you want the EXTRACT() replacement to use
        the internal format returned as an integer, for example DD HH24:MM:SS
        will be replaced with the format DDHH24MMSS::bigint. This depends on
        your application's usage.

  Control SQL Server export behavior
    DROP_ROWVERSION
        PostgreSQL has no equivalent to the rowversion datatype and
        feature; if you want to remove these useless columns, enable this
        directive. Columns of datatype 'rowversion' or 'timestamp' will not
        be exported.

    CASE_INSENSITIVE_SEARCH
        Emulate the same behavior of MSSQL with case insensitive search. If
        the value is citext it will use the citext data type instead of
        char/varchar/text in tables DDL (Ora2Pg will add a CHECK constraint
        for columns with a precision). Instead of citext you can also set a
        collation name that will be used in the columns definitions. To
        disable case insensitive search set it to: none.
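
        For example, to rely on the citext data type (the citext extension
        must already exist on the target database) you could set:

                CASE_INSENSITIVE_SEARCH    citext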

    SELECT_TOP
        Append a TOP N clause to the SELECT command used to extract the
        data from SQL Server. This is equivalent to a WHERE ROWNUM < 1000
        clause for Oracle.

  Special options to handle character encoding
    NLS_LANG and NLS_NCHAR
        By default Ora2Pg will set NLS_LANG to AMERICAN_AMERICA.AL32UTF8 and
        NLS_NCHAR to AL32UTF8. It is not recommended to change those
        settings but in some cases it could be useful. Using your own
        settings with those configuration directives will change the client
        encoding on the Oracle side by setting the environment variables
        $ENV{NLS_LANG} and $ENV{NLS_NCHAR}.

    BINMODE
        By default Ora2Pg will force Perl to use utf8 I/O encoding. This is
        done through a call to the Perl pragma:

                use open ':utf8';

        You can override this encoding by using the BINMODE directive, for
        example you can set it to :locale to use your locale or iso-8859-7,
        it will respectively use

                use open ':locale';
                use open ':encoding(iso-8859-7)';

        If you have changed NLS_LANG to a non-UTF8 encoding, you might want
        to set this directive. See http://perldoc.perl.org/5.14.2/open.html
        for more information. Most of the time, leave this directive
        commented out.

    CLIENT_ENCODING
        By default the PostgreSQL client encoding is automatically set to
        UTF8 to avoid encoding issues. If you have changed the value of
        NLS_LANG you might have to change the encoding of the PostgreSQL
        client.

        You can take a look at the PostgreSQL supported character sets here:
        http://www.postgresql.org/docs/9.0/static/multibyte.html

    FORCE_PLSQL_ENCODING
        To force utf8 encoding of the exported PL/SQL code, enable this
        directive. It could be helpful in some rare conditions.

  PLSQL to PLPGSQL conversion
    Automatic code conversion from Oracle PLSQL to PostgreSQL PLPGSQL is a
    work in progress in Ora2Pg and surely some manual work will always
    remain. The Perl code used for automatic conversion is all stored in a
    specific Perl module named Ora2Pg/PLSQL.pm; feel free to modify/add your
    own code and send me patches. The main work is on rewriting function,
    procedure, package and package body headers and parameters.

    PLSQL_PGSQL
        Enable/disable PLSQL to PLPGSQL conversion. Enabled by default.

    NULL_EQUAL_EMPTY
        Ora2Pg can replace all conditions with a test on NULL by a call to
        the coalesce() function to mimic the Oracle behavior where empty
        strings are considered equal to NULL.

                (field1 IS NULL) is replaced by (coalesce(field1::text, '') = '')
                (field2 IS NOT NULL) is replaced by (field2 IS NOT NULL AND field2::text <> '')

        You might want this replacement to be sure that your application
        will have the same behavior, but if you have control over your
        application a better way is to change it to transform empty strings
        into NULL, because PostgreSQL makes the difference between the two.

    EMPTY_LOB_NULL
        Force empty_clob() and empty_blob() to be exported as NULL instead
        of an empty string for the former and '\x' for the latter. If NULL
        is allowed in your column this might improve data export speed if
        you have a lot of empty LOBs. Default is to preserve the exact data
        from Oracle.

    PACKAGE_AS_SCHEMA
        If you don't want to export packages as schemas but as simple
        functions you might also want to replace all calls to
        package_name.function_name. If you disable the PACKAGE_AS_SCHEMA
        directive then Ora2Pg will replace all calls to
        package_name.function_name() with package_name_function_name().
        Default is to use a schema to emulate the package.

        The replacement will be done in all kinds of DDL or code that are
        parsed by the PLSQL to PLPGSQL converter. PLSQL_PGSQL must be
        enabled or -p used on the command line.
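
        For illustration, with a hypothetical package emp_mgmt containing a
        function increase_sal, a call would be rewritten as follows
        depending on the directive:

                -- PACKAGE_AS_SCHEMA enabled (default): package becomes a schema
                SELECT emp_mgmt.increase_sal(1200, 500);
                -- PACKAGE_AS_SCHEMA disabled: call flattened to a single function name
                SELECT emp_mgmt_increase_sal(1200, 500);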

    REWRITE_OUTER_JOIN
        Enable this directive if the rewrite of Oracle native syntax (+) of
        OUTER JOIN is broken. This will force Ora2Pg to not rewrite such
        code, default is to try to rewrite simple form of right outer join
        for the moment.

    UUID_FUNCTION
        By default Ora2Pg will convert calls to the SYS_GUID() Oracle
        function into calls to uuid_generate_v4 from the uuid-ossp
        extension. You can redefine it to use the gen_random_uuid function
        from the pgcrypto extension by changing the function name. Defaults
        to uuid_generate_v4.

        Note that when a RAW(16) or RAW(32) column is found, or when a RAW
        column has "SYS_GUID()" as default value, Ora2Pg will automatically
        translate the type of the column into uuid, which is probably the
        right translation in most cases. In this case data will be
        automatically migrated to the PostgreSQL uuid data type provided by
        the "uuid-ossp" extension.

    FUNCTION_STABLE
        By default Oracle functions are marked as STABLE as they can not
        modify data, unless they are used in PL/SQL with variable assignment
        or as a conditional expression. You can force Ora2Pg to create these
        functions as VOLATILE by disabling this configuration directive.

    COMMENT_COMMIT_ROLLBACK
        By default calls to COMMIT/ROLLBACK are kept untouched by Ora2Pg to
        force the user to review the logic of the function. Once the logic
        is fixed in the Oracle source code, or if you want to comment out
        these calls, enable this directive.

    COMMENT_SAVEPOINT
        It is common to see SAVEPOINT calls inside PL/SQL procedures
        together with a ROLLBACK TO savepoint_name. When
        COMMENT_COMMIT_ROLLBACK is enabled you may also want to comment out
        SAVEPOINT calls; in this case enable this directive.

    STRING_CONSTANT_REGEXP
        Ora2Pg replaces all string constants during the PL/SQL to PLPGSQL
        translation; string constants are all text included between single
        quotes. If you have some string placeholders used in dynamic queries
        you can set a list of regexps to be temporarily replaced so that
        they do not break the parser. For example:

                STRING_CONSTANT_REGEXP         <placeholder value=".*">

        The list of regexps must use the semicolon as separator.

    ALTERNATIVE_QUOTING_REGEXP
        To support the Alternative Quoting Mechanism ('Q' or 'q') for String
        Literals set the regexp with the text capture to use to extract the
        text part. For example with a variable declared as

                c_sample VARCHAR2(100 CHAR) := q'{This doesn't work.}';

        the regexp to use must be:

                ALTERNATIVE_QUOTING_REGEXP     q'{(.*)}'

        ora2pg will use the $$ delimiter, with the example the result will
        be:

                c_sample varchar(100) := $$This doesn't work.$$;

        The value of this configuration directive can be a list of regexps
        separated by a semicolon. The capture part (between parentheses) is
        mandatory in each regexp if you want to restore the string constant.

    USE_ORAFCE
        If you want to use functions defined in the Orafce library and
        prevent Ora2Pg from translating calls to these functions, enable
        this directive. The Orafce library can be found here:
        https://github.com/orafce/orafce

        By default Ora2Pg rewrites the add_month(), add_year(), date_trunc()
        and to_char() functions, but you may prefer to use the Orafce
        version of these functions, which do not need any code
        transformation.

    AUTONOMOUS_TRANSACTION
        Enable translation of autonomous transactions into a wrapper
        function using dblink or pg_background extension. If you don't want
        to use this translation and just want the function to be exported as
        a normal one without the pragma call, disable this directive.

  Materialized view
    Materialized views are exported as "snapshot materialized views", as
    PostgreSQL only supports full refresh.

    If you want to import the materialized views into a PostgreSQL version
    prior to 9.3 you have to set the configuration directive
    PG_SUPPORTS_MVIEW to 0. In this case Ora2Pg will export all materialized
    views as explained in this document:

            http://tech.jonathangardner.net/wiki/PostgreSQL/Materialized_Views.

    When exporting materialized view Ora2Pg will first add the SQL code to
    create the "materialized_views" table:

            CREATE TABLE materialized_views (
                    mview_name text NOT NULL PRIMARY KEY,
                    view_name text NOT NULL,
                    iname text,
                    last_refresh TIMESTAMP WITH TIME ZONE
            );

    All materialized views will have an entry in this table. It then adds
    the plpgsql code to create three functions:

            create_materialized_view(text, text, text) used to create a materialized view
            drop_materialized_view(text) used to delete a materialized view
            refresh_full_materialized_view(text) used to refresh a view

    then it adds the SQL code to create the view and the materialized view:

            CREATE VIEW mviewname_mview AS
            SELECT ... FROM ...;

            SELECT create_materialized_view('mviewname','mviewname_mview', change with the name of the column to use for the index);

    The first argument is the name of the materialized view, the second the
    name of the view on which the materialized view is based and the third
    is the column name on which the index should be built (most of the time
    the primary key). This column is not automatically deduced so you need
    to replace its name.

    As said above Ora2Pg only supports snapshot materialized views, so the
    table will be entirely refreshed by first issuing a truncate of the
    table and then loading all data from the view again:

             refresh_full_materialized_view('mviewname');

    To drop the materialized view you just have to call the
    drop_materialized_view() function with the name of the materialized view
    as parameter.

  Other configuration directives
    DEBUG
        Setting it to 1 will enable verbose output.

    IMPORT
        You can define common Ora2Pg configuration directives into a single
        file that can be imported into other configuration files with the
        IMPORT configuration directive as follows:

                IMPORT  commonfile.conf

        will import all configuration directives defined into
        commonfile.conf into the current configuration file.

  Exporting views as PostgreSQL tables
    You can export any Oracle view as a PostgreSQL table simply by setting
    the TYPE configuration option to TABLE to get the corresponding create
    table statement, or by using type COPY or INSERT to export the
    corresponding data. To allow that you have to specify your views in the
    VIEW_AS_TABLE configuration option.

    Then if Ora2Pg finds the view it will extract its schema (if TYPE=TABLE)
    into a PG create table form, then it will extract the data (if TYPE=COPY
    or INSERT) following the view schema.

    For example, with the following view:

            CREATE OR REPLACE VIEW product_prices (category_id, product_count, low_price, high_price) AS
            SELECT  category_id, COUNT(*) as product_count,
                MIN(list_price) as low_price,
                MAX(list_price) as high_price
             FROM   product_information
            GROUP BY category_id;

    Setting VIEW_AS_TABLE to product_prices and using export type TABLE,
    will force Ora2Pg to detect columns returned types and to generate a
    create table statement:

            CREATE TABLE product_prices (
                    category_id bigint,
                    product_count integer,
                    low_price numeric,
                    high_price numeric
            );

    Data will be loaded following the COPY or INSERT export type and the
    view declaration.

    You can use the ALLOW and EXCLUDE directives in addition to filter the
    objects to export.
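
    For example, a hedged sketch of the configuration and commands for the
    product_prices view above (output file names are illustrative): set the
    directive in ora2pg.conf, then run the schema and data exports.

            VIEW_AS_TABLE   product_prices

            ora2pg -c ora2pg.conf -t TABLE -o product_prices_table.sql
            ora2pg -c ora2pg.conf -t COPY  -o product_prices_data.sql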

  Export as Kettle transformation XML files
    The KETTLE export type is useful if you want to use Pentaho Data
    Integrator (Kettle) to import data to PostgreSQL. With this type of
    export Ora2Pg will generate one XML Kettle transformation file (.ktr)
    per table and add a line to manually execute the transformation in the
    output.sql file. For example:

            ora2pg -c ora2pg.conf -t KETTLE -j 12 -a MYTABLE -o load_mydata.sh

    will generate one file called 'HR.MYTABLE.ktr' and add a line to the
    output file (load_mydata.sh):

            #!/bin/sh

            KETTLE_TEMPLATE_PATH='.'

            JAVAMAXMEM=4096 ./pan.sh -file $KETTLE_TEMPLATE_PATH/HR.MYTABLE.ktr -level Detailed

    The -j 12 option will create a template with 12 processes to insert data
    into PostgreSQL. It is also possible to specify the number of parallel
    queries used to extract data from Oracle with the -J command line
    option as follows:

            ora2pg -c ora2pg.conf -t KETTLE -J 4 -j 12 -a EMPLOYEES -o load_mydata.sh

    This is only possible if there is a unique key defined on a numeric
    column or if you have defined the technical key to use to split the
    query between cores in the DEFINED_PK configuration directive. For
    example:

            DEFINED_PK      EMPLOYEES:employee_id

    will force the number of Oracle connection copies to 4 and define the
    SQL query as follows in the Kettle XML transformation file:

            <sql>SELECT * FROM HR.EMPLOYEES WHERE ABS(MOD(employee_id,${Internal.Step.Unique.Count}))=${Internal.Step.Unique.Number}</sql>

    The KETTLE export type requires that the Oracle and PostgreSQL DSN are
    defined. You can also activate the TRUNCATE_TABLE directive to force a
    truncation of the table before data import.

    The KETTLE export type is an original work of Marc Cousin.

  Migration cost assessment
    Estimating the cost of a migration process from Oracle to PostgreSQL is
    not easy. To obtain a good assessment of this migration cost, Ora2Pg
    will inspect all database objects, all functions and stored procedures
    to detect whether there are objects and PL/SQL code that can not be
    automatically converted by Ora2Pg.

    Ora2Pg has a content analysis mode that inspects the Oracle database to
    generate a text report on what the Oracle database contains and what can
    not be exported.

    To activate the "analysis and report" mode, you have to use the export
    de type SHOW_REPORT like in the following command:

            ora2pg -t SHOW_REPORT

    Here is a sample report obtained with this command:

            --------------------------------------
            Ora2Pg: Oracle Database Content Report
            --------------------------------------
            Version Oracle Database 10g Enterprise Edition Release 10.2.0.1.0
            Schema  HR
            Size  880.00 MB
         
            --------------------------------------
            Object  Number  Invalid Comments
            --------------------------------------
            CLUSTER   2 0 Clusters are not supported and will not be exported.
            FUNCTION  40  0 Total size of function code: 81992.
            INDEX     435 0 232 index(es) are concerned by the export, others are automatically generated and will
                                            do so on PostgreSQL. 1 bitmap index(es). 230 b-tree index(es). 1 reversed b-tree index(es)
                                            Note that bitmap index(es) will be exported as b-tree index(es) if any. Cluster, domain,
                                            bitmap join and IOT indexes will not be exported at all. Reverse indexes are not exported
                                            too, you may use a trigram-based index (see pg_trgm) or a reverse() function based index
                                            and search. You may also use 'varchar_pattern_ops', 'text_pattern_ops' or 'bpchar_pattern_ops'
                                            operators in your indexes to improve search with the LIKE operator respectively into
                                            varchar, text or char columns.
            MATERIALIZED VIEW 1 0 All materialized view will be exported as snapshot materialized views, they
                                            are only updated when fully refreshed.
            PACKAGE BODY  2 1 Total size of package code: 20700.
            PROCEDURE 7 0 Total size of procedure code: 19198.
            SEQUENCE  160 0 Sequences are fully supported, but all call to sequence_name.NEXTVAL or sequence_name.CURRVAL
                                            will be transformed into NEXTVAL('sequence_name') or CURRVAL('sequence_name').
            TABLE     265 0 1 external table(s) will be exported as standard table. See EXTERNAL_TO_FDW configuration
                                            directive to export as file_fdw foreign tables or use COPY in your code if you just
                                            want to load data from external files. 2 binary columns. 4 unknown types.
            TABLE PARTITION 8 0 Partitions are exported using table inheritance and check constraint. 1 HASH partitions.
                                            2 LIST partitions. 6 RANGE partitions. Note that Hash partitions are not supported.
            TRIGGER   30  0 Total size of trigger code: 21677.
            TYPE      7 1 5 type(s) are concerned by the export, others are not supported. 2 Nested Tables.
                                            2 Object type. 1 Subtype. 1 Type Boby. 1 Type inherited. 1 Varrays. Note that Type
                                            inherited and Subtype are converted as table, type inheritance is not supported.
            TYPE BODY 0 3 Export of type with member method are not supported, they will not be exported.
            VIEW      7 0 Views are fully supported, but if you have updatable views you will need to use
                                            INSTEAD OF triggers.
            DATABASE LINK 1 0 Database links will not be exported. You may try the dblink perl contrib module or use
                                            the SQL/MED PostgreSQL features with the different Foreign Data Wrapper (FDW) extensions.
                                        
            Note: Invalid code will not be exported unless the EXPORT_INVALID configuration directive is activated.

    Once the database has been analysed, Ora2Pg, thanks to its ability to
    convert SQL and PL/SQL code from Oracle syntax to PostgreSQL, can go
    further by estimating the code difficulty and the time necessary to
    operate a full database migration.

    To estimate the migration cost in person-days, Ora2Pg allows you to use
    a configuration directive called ESTIMATE_COST that you can also enable
    at the command line:

            --estimate_cost

    This feature can only be used with the SHOW_REPORT, FUNCTION, PROCEDURE,
    PACKAGE and QUERY export types.

            ora2pg -t SHOW_REPORT  --estimate_cost

    The generated report is the same as above but with a new 'Estimated
    cost' column, as follows:

            --------------------------------------
            Ora2Pg: Oracle Database Content Report
            --------------------------------------
            Version Oracle Database 10g Express Edition Release 10.2.0.1.0
            Schema  HR
            Size  890.00 MB
         
            --------------------------------------
            Object  Number  Invalid Estimated cost  Comments
            --------------------------------------
            DATABASE LINK  3 0 9 Database links will be exported as SQL/MED PostgreSQL's Foreign Data Wrapper (FDW) extensions
                                            using oracle_fdw.
            FUNCTION  2 0 7 Total size of function code: 369 bytes. HIGH_SALARY: 2, VALIDATE_SSN: 3.
            INDEX 21  0 11  11 index(es) are concerned by the export, others are automatically generated and will do so
                                            on PostgreSQL. 11 b-tree index(es). Note that bitmap index(es) will be exported as b-tree
                                            index(es) if any. Cluster, domain, bitmap join and IOT indexes will not be exported at all.
                                            Reverse indexes are not exported too, you may use a trigram-based index (see pg_trgm) or a
                                            reverse() function based index and search. You may also use 'varchar_pattern_ops', 'text_pattern_ops'
                                            or 'bpchar_pattern_ops' operators in your indexes to improve search with the LIKE operator
                                            respectively into varchar, text or char columns.
            JOB 0 0 0 Job are not exported. You may set external cron job with them.
            MATERIALIZED VIEW 1 0 3 All materialized view will be exported as snapshot materialized views, they
                                                    are only updated when fully refreshed.
            PACKAGE BODY  0 2 54  Total size of package code: 2487 bytes. Number of procedures and functions found
                                                    inside those packages: 7. two_proc.get_table: 10, emp_mgmt.create_dept: 4,
                                                    emp_mgmt.hire: 13, emp_mgmt.increase_comm: 4, emp_mgmt.increase_sal: 4,
                                                    emp_mgmt.remove_dept: 3, emp_mgmt.remove_emp: 2.
            PROCEDURE 4 0 39  Total size of procedure code: 2436 bytes. TEST_COMMENTAIRE: 2, SECURE_DML: 3,
                                                    PHD_GET_TABLE: 24, ADD_JOB_HISTORY: 6.
            SEQUENCE  3 0 0 Sequences are fully supported, but all call to sequence_name.NEXTVAL or sequence_name.CURRVAL
                                                    will be transformed into NEXTVAL('sequence_name') or CURRVAL('sequence_name').
            SYNONYM   3 0 4 SYNONYMs will be exported as views. SYNONYMs do not exists with PostgreSQL but a common workaround
                                                    is to use views or set the PostgreSQL search_path in your session to access
                                                    object outside the current schema.
                                                    user1.emp_details_view_v is an alias to hr.emp_details_view.
                                                    user1.emp_table is an alias to hr.employees@other_server.
                                                    user1.offices is an alias to hr.locations.
            TABLE 17  0 8.5 1 external table(s) will be exported as standard table. See EXTERNAL_TO_FDW configuration
                                            directive to export as file_fdw foreign tables or use COPY in your code if you just want to
                                            load data from external files. 2 binary columns. 4 unknown types.
            TRIGGER 1 1 4 Total size of trigger code: 123 bytes. UPDATE_JOB_HISTORY: 2.
            TYPE  7 1 5 5 type(s) are concerned by the export, others are not supported. 2 Nested Tables. 2 Object type.
                                            1 Subtype. 1 Type Boby. 1 Type inherited. 1 Varrays. Note that Type inherited and Subtype are
                                            converted as table, type inheritance is not supported.
            TYPE BODY 0 3 30  Export of type with member method are not supported, they will not be exported.
            VIEW  1 1 1 Views are fully supported, but if you have updatable views you will need to use INSTEAD OF triggers.
            --------------------------------------
            Total 65  8 162.5 162.5 cost migration units means approximatively 2 man day(s).

    The last line shows the total estimated migration cost in person-days
    based on the number of migration units estimated for each object. A
    migration unit represents around five minutes for a PostgreSQL expert.
    If this is your first migration you can increase it with the
    configuration directive COST_UNIT_VALUE or the --cost_unit_value command
    line option:

            ora2pg -t SHOW_REPORT  --estimate_cost --cost_unit_value 10

    Ora2Pg is also able to give you a migration difficulty level assessment,
    here is a sample:

    Migration level: B-5

        Migration levels:
            A - Migration that might be run automatically
            B - Migration with code rewrite and a person-days cost up to 5 days
            C - Migration with code rewrite and a person-days cost above 5 days
        Technical levels:
            1 = trivial: no stored functions and no triggers
            2 = easy: no stored functions but with triggers, no manual rewriting
            3 = simple: stored functions and/or triggers, no manual rewriting
            4 = manual: no stored functions but with triggers or views with code rewriting
            5 = difficult: stored functions and/or triggers with code rewriting

    This assessment consists of a letter, A or B, specifying whether the
    migration needs manual rewriting or not, and a number from 1 to 5 giving
    the technical difficulty level. An additional option,
    --human_days_limit, specifies the person-days limit above which the
    migration level should be set to C, to indicate that it needs a huge
    amount of work and full project management with migration support.
    Default is 10 person-days. You can use the configuration directive
    HUMAN_DAYS_LIMIT to change this default value permanently.
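
    For example, a hedged command sketch raising this limit to 20
    person-days for a single assessment run:

            ora2pg -t SHOW_REPORT --estimate_cost --human_days_limit 20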

    This feature has been developed to help you or your boss to decide which
    database to migrate first and the team that must be mobilized to operate
    the migration.

  Global Oracle and MySQL migration assessment
    Ora2Pg comes with a script, ora2pg_scanner, that can be used when you
    have a huge number of instances and schemas to scan for migration
    assessment.

    Usage: ora2pg_scanner -l CSVFILE [-o OUTDIR]

       -b | --binpath DIR: full path to directory where the ora2pg binary stays.
                    Might be useful only on Windows OS.
       -c | --config FILE: set custom configuration file to use otherwise ora2pg
                    will use the default: /etc/ora2pg/ora2pg.conf.
       -l | --list FILE : CSV file containing a list of databases to scan with
                    all required information. The first line of the file
                    can contain the following header that describes the
                    format that must be used:

                    "type","schema/database","dsn","user","password"

       -o | --outdir DIR : (optional) by default all reports will be dumped to a
                    directory named 'output', it will be created automatically.
                    If you want to change the name of this directory, set the name
                    at second argument.

       -t | --test : just try all connections by retrieving the required schema
                     or database name. Useful to validate your CSV list file.
       -u | --unit MIN : redefine globally the migration cost unit value in minutes.
                     Default is taken from the ora2pg.conf (default 5 minutes).

       Here is a full example of a CSV databases list file:

            "type","schema/database","dsn","user","password"
            "MYSQL","sakila","dbi:mysql:host=192.168.1.10;database=sakila;port=3306","root","secret"
            "ORACLE","HR","dbi:Oracle:host=192.168.1.10;sid=XE;port=1521","system","manager"
            "MSSQL","HR","dbi:ODBC:driver=msodbcsql18;server=srv.database.windows.net;database=testdb","system","manager"

       The CSV field separator must be a comma.

       Note that if you want to scan all schemas from an Oracle instance you just
       have to leave the schema field empty, Ora2Pg will automatically detect all
       available schemas and generate a report for each one. Of course you need to
       use a connection user with enough privileges to be able to scan all schemas.
       For example:

            "ORACLE","","dbi:Oracle:host=192.168.1.10;sid=XE;port=1521","system","manager"
            "MSSQL","","dbi:ODBC:driver=msodbcsql18;server=srv.database.windows.net;database=testdb","usrname","passwd"

       will generate a report for all schemas in the XE instance. Note that in this
       case the SCHEMA directive in ora2pg.conf must not be set.

    It will generate a CSV file with the assessment result, one line per
    schema or database and a detailed HTML report for each database scanned.

    Hint: use the -t | --test option first to test all the connections in
    your CSV file.
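
    For example, a hedged sketch of a typical run (the CSV file name and
    output directory are illustrative):

            ora2pg_scanner -l databases.csv -t
            ora2pg_scanner -l databases.csv -o assessment_out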

    For Windows users, you must use the -b command line option to set the
    directory where the ora2pg binary is located, otherwise the ora2pg
    command calls will fail.

    In the migration assessment details about functions, Ora2Pg always
    includes by default 2 migration units for TEST and 1 unit for SIZE per
    1000 characters of code. This means that by default it will add 15
    minutes per function to the migration assessment. Obviously if you have
    unit tests or very simple functions this will not represent the real
    migration time.

  Migration assessment method
    Migration unit scores given to each type of Oracle database object are
    defined in the Perl library lib/Ora2Pg/PLSQL.pm in the %OBJECT_SCORE
    variable definition.

    The number of PL/SQL lines associated to a migration unit is also
    defined in this file in the $SIZE_SCORE variable value.

    The number of migration units associated to each PL/SQL code
    difficulties can be found in the same Perl library lib/Ora2Pg/PLSQL.pm
    in the hash %UNCOVERED_SCORE initialization.

    This assessment method is a work in progress, so I'm expecting feedback
    on migration experiences to polish the scores/units attributed in those
    variables.

  Improving indexes and constraints creation speed
    Using the LOAD export type and a file containing the SQL orders to
    perform, it is possible to dispatch those orders over multiple
    PostgreSQL connections. To be able to use this feature, PG_DSN, PG_USER
    and PG_PWD must be set. Then:

            ora2pg -t LOAD -c config/ora2pg.conf -i schema/tables/INDEXES_table.sql -j 4

    will dispatch indexes creation over 4 simultaneous PostgreSQL
    connections.

    This will considerably accelerate this part of the migration process
    for databases with a huge amount of data.

  Exporting LONG RAW
    If you still have columns defined as LONG RAW, Ora2Pg will not be able
    to export this kind of data. The OCI library fails to export them and
    always returns the same first record. To be able to export the data you
    need to transform the field into a BLOB by creating a temporary table
    before migrating the data. For example, the Oracle table:

            SQL> DESC TEST_LONGRAW
             Name                 NULL ?   Type
             -------------------- -------- ----------------------------
             ID                            NUMBER
             C1                            LONG RAW

    need to be "translated" into a table using BLOB as follow:

            CREATE TABLE test_blob (id NUMBER, c1 BLOB);

    And then copy the data with the following INSERT query:

            INSERT INTO test_blob SELECT id, to_lob(c1) FROM test_longraw;

    Then you just have to exclude the original table from the export (see
    the EXCLUDE directive) and rename the new temporary table on the fly
    using the REPLACE_TABLES configuration directive.
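
    A hedged configuration sketch for this example, assuming the usual
    ORIGINAL_NAME:NEW_NAME syntax of REPLACE_TABLES:

            EXCLUDE         TEST_LONGRAW
            REPLACE_TABLES  TEST_BLOB:TEST_LONGRAW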

  Global variables
    Oracle allows the use of global variables defined in packages. Ora2Pg
    will export these variables for PostgreSQL as user-defined custom
    variables available in a session. Oracle variable assignments are
    exported as calls to:

        PERFORM set_config('pkgname.varname', value, false);

    Use of these variables in the code is replaced by:

        current_setting('pkgname.varname')::global_variables_type;

    where global_variables_type is the type of the variable extracted from
    the package definition.

    If the variable is a constant or has a default value assigned at
    declaration, Ora2Pg will create a file global_variables.conf with the
    definitions to include in the postgresql.conf file so that their values
    will already be set at database connection time. Note that the value can
    always be modified by the user, so you can not have an exact equivalent
    of a constant.
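
    For illustration, with a hypothetical package variable
    emp_mgmt.max_salary of type integer, the translation would look like
    this sketch:

        -- assignment in the converted PL/pgSQL code
        PERFORM set_config('emp_mgmt.max_salary', 100000::text, false);
        -- read access in the converted code
        current_setting('emp_mgmt.max_salary')::integer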

  Hints
    Converting your queries with Oracle-style outer join (+) syntax to ANSI
    standard SQL on the Oracle side can save you a lot of time during the
    migration. The TOAD Query Builder can rewrite these using the proper
    ANSI syntax, see:
    http://www.toadworld.com/products/toad-for-oracle/f/10/t/9518.aspx

    There's also an alternative with SQL Developer Data Modeler, see
    http://www.thatjeffsmith.com/archive/2012/01/sql-developer-data-modeler-
    quick-tip-use-oracle-join-syntax-or-ansi/

    Toad is also able to rewrite the native Oracle DECODE() syntax into the
    ANSI standard SQL CASE statement. You can find some slides about this in
    a presentation given at PgConf.RU:
    http://ora2pg.darold.net/slides/ora2pg_the_hard_way.pdf

  Test the migration
    The action type called TEST allows you to check that all objects from
    the Oracle database have been created under PostgreSQL. Of course PG_DSN
    must be set to be able to check the PostgreSQL side.

    Note that this feature respects the schema name limitation if
    EXPORT_SCHEMA and SCHEMA or PG_SCHEMA are defined. If only EXPORT_SCHEMA
    is set, all schemas from Oracle and PostgreSQL are scanned. You can
    filter to a single schema using SCHEMA and/or PG_SCHEMA but you can not
    filter on a list of schemas. To test a list of schemas you will have to
    repeat the calls to Ora2Pg, specifying a single schema each time.

    For example, the command:

            ora2pg -t TEST -c config/ora2pg.conf > migration_diff.txt

    will create a file containing the report of all object and row counts
    on both sides, Oracle and PostgreSQL, with an error section giving you
    the details of the differences for each kind of object. Here is a sample
    result:

            [TEST INDEXES COUNT]
            ORACLEDB:DEPARTMENTS:2
            POSTGRES:departments:1
            ORACLEDB:EMPLOYEES:6
            POSTGRES:employees:6
            [ERRORS INDEXES COUNT]
            Table departments doesn't have the same number of indexes in Oracle (2) and in PostgreSQL (1).

            [TEST UNIQUE CONSTRAINTS COUNT]
            ORACLEDB:DEPARTMENTS:1
            POSTGRES:departments:1
            ORACLEDB:EMPLOYEES:1
            POSTGRES:employees:1
            [ERRORS UNIQUE CONSTRAINTS COUNT]
            OK, Oracle and PostgreSQL have the same number of unique constraints.

            [TEST PRIMARY KEYS COUNT]
            ORACLEDB:DEPARTMENTS:1
            POSTGRES:departments:1
            ORACLEDB:EMPLOYEES:1
            POSTGRES:employees:1
            [ERRORS PRIMARY KEYS COUNT]
            OK, Oracle and PostgreSQL have the same number of primary keys.

            [TEST CHECK CONSTRAINTS COUNT]
            ORACLEDB:DEPARTMENTS:1
            POSTGRES:departments:1
            ORACLEDB:EMPLOYEES:1
            POSTGRES:employees:1
            [ERRORS CHECK CONSTRAINTS COUNT]
            OK, Oracle and PostgreSQL have the same number of check constraints.

            [TEST NOT NULL CONSTRAINTS COUNT]
            ORACLEDB:DEPARTMENTS:1
            POSTGRES:departments:1
            ORACLEDB:EMPLOYEES:1
            POSTGRES:employees:1
            [ERRORS NOT NULL CONSTRAINTS COUNT]
            OK, Oracle and PostgreSQL have the same number of not null constraints.

            [TEST COLUMN DEFAULT VALUE COUNT]
            ORACLEDB:DEPARTMENTS:1
            POSTGRES:departments:1
            ORACLEDB:EMPLOYEES:1
            POSTGRES:employees:1
            [ERRORS COLUMN DEFAULT VALUE COUNT]
            OK, Oracle and PostgreSQL have the same number of column default value.

            [TEST IDENTITY COLUMN COUNT]
            ORACLEDB:DEPARTMENTS:1
            POSTGRES:departments:1
            ORACLEDB:EMPLOYEES:0
            POSTGRES:employees:0
            [ERRORS IDENTITY COLUMN COUNT]
            OK, Oracle and PostgreSQL have the same number of identity column.

            [TEST FOREIGN KEYS COUNT]
            ORACLEDB:DEPARTMENTS:0
            POSTGRES:departments:0
            ORACLEDB:EMPLOYEES:1
            POSTGRES:employees:1
            [ERRORS FOREIGN KEYS COUNT]
            OK, Oracle and PostgreSQL have the same number of foreign keys.

            [TEST TABLE COUNT]
            ORACLEDB:TABLE:2
            POSTGRES:TABLE:2
            [ERRORS TABLE COUNT]
            OK, Oracle and PostgreSQL have the same number of TABLE.

            [TEST TABLE TRIGGERS COUNT]
            ORACLEDB:DEPARTMENTS:0
            POSTGRES:departments:0
            ORACLEDB:EMPLOYEES:1
            POSTGRES:employees:1
            [ERRORS TABLE TRIGGERS COUNT]
            OK, Oracle and PostgreSQL have the same number of table triggers.

            [TEST TRIGGER COUNT]
            ORACLEDB:TRIGGER:2
            POSTGRES:TRIGGER:2
            [ERRORS TRIGGER COUNT]
            OK, Oracle and PostgreSQL have the same number of TRIGGER.

            [TEST VIEW COUNT]
            ORACLEDB:VIEW:1
            POSTGRES:VIEW:1
            [ERRORS VIEW COUNT]
            OK, Oracle and PostgreSQL have the same number of VIEW.

            [TEST MVIEW COUNT]
            ORACLEDB:MVIEW:0
            POSTGRES:MVIEW:0
            [ERRORS MVIEW COUNT]
            OK, Oracle and PostgreSQL have the same number of MVIEW.

            [TEST SEQUENCE COUNT]
            ORACLEDB:SEQUENCE:1
            POSTGRES:SEQUENCE:0
            [ERRORS SEQUENCE COUNT]
            SEQUENCE does not have the same count in Oracle (1) and in PostgreSQL (0).

            [TEST TYPE COUNT]
            ORACLEDB:TYPE:1
            POSTGRES:TYPE:0
            [ERRORS TYPE COUNT]
            TYPE does not have the same count in Oracle (1) and in PostgreSQL (0).

            [TEST FDW COUNT]
            ORACLEDB:FDW:0
            POSTGRES:FDW:0
            [ERRORS FDW COUNT]
            OK, Oracle and PostgreSQL have the same number of FDW.

            [TEST FUNCTION COUNT]
            ORACLEDB:FUNCTION:3
            POSTGRES:FUNCTION:3
            [ERRORS FUNCTION COUNT]
            OK, Oracle and PostgreSQL have the same number of functions.

            [TEST SEQUENCE VALUES]
            ORACLEDB:EMPLOYEES_NUM_SEQ:1285
            POSTGRES:employees_num_seq:1285
            [ERRORS SEQUENCE VALUES COUNT]
            OK, Oracle and PostgreSQL have the same values for sequences

            [TEST ROWS COUNT]
            ORACLEDB:DEPARTMENTS:27
            POSTGRES:departments:27
            ORACLEDB:EMPLOYEES:854
            POSTGRES:employees:854
            [ERRORS ROWS COUNT]
            OK, Oracle and PostgreSQL have the same number of rows.

  Data validation
    Data validation consists of comparing data retrieved from a foreign
    table pointing to the source Oracle table with a local PostgreSQL table
    resulting from the data export.

    To run data validation you can use a direct connection like any other
    Ora2Pg action, but you can also use the oracle_fdw, mysql_fdw or tds_fdw
    extension provided that the FDW_SERVER and PG_DSN configuration
    directives are set.

    By default Ora2Pg will extract the first 10000 rows from both sides;
    you can change this value using the DATA_VALIDATION_ROWS directive. When
    it is set to zero, all rows of the tables will be compared.

    Data validation requires that the table has a primary key or unique
    index and that the key columns are not LOBs. Rows will be sorted using
    this unique key. Due to differences in sort behavior between Oracle and
    PostgreSQL, if the collation of unique key columns in PostgreSQL is not
    'C', the sort order can differ from Oracle's. In this case the data
    validation will fail.

    Data validation must be done before any data is modified.

    Ora2Pg will stop comparing two tables once DATA_VALIDATION_ROWS is
    reached or 10 errors have been encountered; the result is dumped into a
    file named "data_validation.log" written to the current directory by
    default. The number of errors before stopping the diff between rows can
    be controlled using the configuration directive DATA_VALIDATION_ERROR.
    All rows in error are printed to the output file for your analysis.

    It is possible to parallelize data validation by using the -P option or
    the corresponding configuration directive PARALLEL_TABLES in
    ora2pg.conf.
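
    A hedged sketch of the related ora2pg.conf settings, comparing all rows,
    tolerating up to 20 mismatches and validating 4 tables in parallel
    (values are illustrative):

        DATA_VALIDATION_ROWS    0
        DATA_VALIDATION_ERROR   20
        PARALLEL_TABLES         4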

  Use of System Change Number (SCN)
    Ora2Pg is able to export data as of a specific SCN. You can set it at
    command line using the -S or --scn option. You can give a specific SCN
    or if you want to use the current SCN at first connection time set the
    value to 'current'. In this last case the connection user must have the
    "SELECT ANY DICTIONARY" privilege or the "SELECT_CATALOG_ROLE" role; the
    current SCN is looked up in the v$database view.

    Example of use:

        ora2pg -c ora2pg.conf -t COPY --scn 16605281

    This adds the following clause to the query used to retrieve data for
    example:

        AS OF SCN 16605281

    You can also use the --scn option to use the Oracle flashback
    capability by specifying a timestamp expression instead of an SCN. For
    example:

        ora2pg -c ora2pg.conf -t COPY --scn "TO_TIMESTAMP('2021-12-01 00:00:00', 'YYYY-MM-DD HH:MI:SS')"

    This will add the following clause to the query used to retrieve data:

        AS OF TIMESTAMP TO_TIMESTAMP('2021-12-01 00:00:00', 'YYYY-MM-DD HH:MI:SS')

    or, for example, to only retrieve yesterday's data:

        ora2pg -c ora2pg.conf -t COPY --scn "SYSDATE - 1"

  Change Data Capture (CDC)
    Ora2Pg does not have such a feature, which would allow importing data
    and then applying only the changes made after the first import. But you
    can use the --cdc_ready option to export data while registering the SCN
    at the time of the table export. All SCNs per table are written to a
    file named TABLES_SCN.log by default; it can be changed using the -C |
    --cdc_file option.

    These SCNs registered per table during a COPY or INSERT export can be
    used with a CDC tool. The format of the file is one tablename:SCN per
    line.
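
    For example, a hedged command sketch writing the per-table SCNs to a
    custom file (the file name is illustrative):

        ora2pg -c ora2pg.conf -t COPY --cdc_ready --cdc_file my_tables_scn.log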

  Importing BLOB as large objects
    By default Ora2Pg imports Oracle BLOB as bytea, the destination column
    is created using the bytea data type. If you want to use large object
    instead of bytea, just add the --blob_to_lo option to the ora2pg
    command. It will create the destination column as data type Oid and will
    save the BLOB as a large object using the lo_from_bytea() function. The
    Oid returned by the call to lo_from_bytea() is inserted in the
    destination column instead of a bytea. Because of the use of the
    function this option can only be used with actions SHOW_COLUMN, TABLE
    and INSERT. Action COPY is not allowed.

    If you want to use COPY or have huge BLOBs ( > 1GB) that can not be
    imported using lo_from_bytea(), you can add the --lo_import option to
    the ora2pg command. This will allow importing data in two passes.

    1) Exporting data using COPY or INSERT will set the Oid destination
    column for BLOBs to 0 and save the BLOB value into a dedicated file. It
    will also create a shell script to import the BLOB files into the
    database using the psql command \lo_import and to update the table's Oid
    column to the returned large object Oid. The script is named
    lo_import-TABLENAME.sh.

    2) Execute all lo_import-TABLENAME.sh scripts after setting the
    environment variable PGDATABASE and, optionally, PGHOST, PGPORT,
    PGUSER, etc. if they do not correspond to the libpq defaults (see the
    example below).
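
    As an illustration of the two passes (the database, host and table
    names are hypothetical):

        # Pass 1: export the data, dump BLOBs to files and generate the
        # lo_import-TABLENAME.sh scripts
        ora2pg -c ora2pg.conf -t COPY --lo_import

        # Pass 2: load the large objects and update the Oid columns
        export PGDATABASE=mydb PGHOST=db.example.com PGUSER=postgres
        sh lo_import-MYTABLE.sh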

    You might also manually run a VACUUM FULL on the table to remove the
    bloat created by the table updates.

    Limitation: the table must have a primary key; it is used to build the
    WHERE clause that updates the Oid column after the large object import.
    Importing BLOBs using this second method (--lo_import) is very slow, so
    it should be reserved for rows where the BLOB is larger than 1 GB; for
    all other rows use the --blob_to_lo option. To filter the rows you can
    use the WHERE configuration directive in ora2pg.conf.
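
    For example, a hedged sketch of a per-table WHERE clause in ora2pg.conf
    (table and column names are hypothetical) restricting the --lo_import
    export to BLOBs larger than 1 GB:

        WHERE   MYTABLE[dbms_lob.getlength(blob_col) > 1073741824]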

SUPPORT
  Author / Maintainer
    Gilles Darold <gilles AT darold DOT net>

    Please send any bug reports, patches, questions, etc. to <gilles AT
    darold DOT net>.

  Feature request
    If you need new features let me know at <gilles AT darold DOT net>.
    This helps a lot to develop a better and more useful tool.

  How to contribute?
    Any contribution to build a better tool is welcome; just send me your
    ideas, feature requests or patches and they will be applied.

LICENSE
    Copyright (c) 2000-2024 Gilles Darold - All rights reserved.

            This program is free software: you can redistribute it and/or modify
            it under the terms of the GNU General Public License as published by
            the Free Software Foundation, either version 3 of the License, or
            any later version.

            This program is distributed in the hope that it will be useful,
            but WITHOUT ANY WARRANTY; without even the implied warranty of
            MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
            GNU General Public License for more details.

            You should have received a copy of the GNU General Public License
            along with this program.  If not, see < http://www.gnu.org/licenses/ >.

ACKNOWLEDGEMENT
    Many thanks to all the great contributors; see the changelog for all
    acknowledgments.
