PERFORMANCE BOOSTERS FOR ETL PROGRAMS:
Challenges
ETL processes can involve considerable complexity, and significant operational problems can occur with improperly designed ETL systems.
The range of data values or data quality in an operational system may exceed the expectations of designers at the time validation and transformation rules are specified. Data profiling of a source during data analysis can identify the data conditions that will need to be managed by transformation rule specifications, leading to an amendment of the validation rules explicitly and implicitly implemented in the ETL process.
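As a concrete illustration, a minimal profiling pass over a delimited extract can tally null counts and distinct values per column before transformation rules are written. The file name, delimiter, and column handling below are illustrative assumptions, not part of any particular tool.

    import csv
    from collections import Counter, defaultdict

    def profile(path, delimiter=","):
        """Report per-column null counts and distinct-value counts for a delimited extract."""
        nulls, distinct, rows = Counter(), defaultdict(set), 0
        with open(path, newline="") as f:
            for row in csv.DictReader(f, delimiter=delimiter):
                rows += 1
                for col, val in row.items():
                    if val is None or val.strip() == "":
                        nulls[col] += 1          # empty or missing value
                    else:
                        distinct[col].add(val)   # track the observed value range
        for col in sorted(distinct.keys() | nulls.keys()):
            print(f"{col}: rows={rows} nulls={nulls[col]} distinct={len(distinct[col])}")

    # profile("customer_extract.csv")   # hypothetical source file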
Data warehouses are typically assembled from a variety of data sources with different formats and purposes. As such, ETL is a key process to bring all the data together in a standard, homogeneous environment.
Design analysts should establish the scalability of an ETL system across the lifetime of its usage. This includes understanding the volumes of data that will have to be processed within service-level agreements. The time available to extract from source systems may change, which may mean the same amount of data has to be processed in less time. Some ETL systems have to scale to process terabytes of data to update data warehouses holding tens of terabytes of data. Increasing volumes of data may require designs that can scale from daily batch to multiple-day micro-batch to integration with message queues or real-time change-data capture for continuous transformation and update.
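As a rough sketch of what the move from daily batch to micro-batch can look like, the loop below polls the source for rows changed since the last high-water mark and loads only those. The fetch_changes and load_batch helpers and the polling interval are assumptions for illustration.

    import time

    def run_microbatches(fetch_changes, load_batch, interval_seconds=300):
        """Poll the source every few minutes and load only rows changed since the last watermark."""
        watermark = None   # e.g. the highest modification timestamp already loaded
        while True:
            rows, new_watermark = fetch_changes(since=watermark)   # assumed extract helper
            if rows:
                load_batch(rows)                                   # assumed load helper
                watermark = new_watermark
            time.sleep(interval_seconds)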
Performance
ETL vendors benchmark their record systems at multiple TB (terabytes) per hour (or ~1 GB per second) using powerful servers with multiple CPUs, multiple hard drives, multiple gigabit network connections, and lots of memory. The fastest ETL record is currently held by Syncsort,[1] Vertica, and HP at 5.4 TB in under an hour, which is more than twice as fast as the earlier record held by Microsoft and Unisys.
In real life, the slowest part of an ETL process usually occurs in the database load phase. Databases may perform slowly because they have to take care of concurrency, integrity maintenance, and indices. Thus, for better performance, it may make sense to employ:
- Direct-path extraction or bulk unload whenever possible (instead of querying the database), to reduce the load on the source system while getting a high-speed extract
- Most of the transformation processing outside of the database
- Bulk load operations whenever possible (see the sketch after this list).
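For example, with a PostgreSQL target a single COPY-based bulk load typically beats row-by-row INSERTs by a wide margin. The sketch below assumes psycopg2 and a staging table named stage_orders; both are illustrative choices, not something prescribed by a specific ETL product.

    import psycopg2

    def bulk_load(conn, csv_path):
        """Load a prepared CSV file into a staging table with COPY instead of per-row INSERTs."""
        with conn.cursor() as cur, open(csv_path) as f:
            cur.copy_expert(
                "COPY stage_orders FROM STDIN WITH (FORMAT csv, HEADER true)",
                f,
            )
        conn.commit()

    # conn = psycopg2.connect("dbname=dw")   # hypothetical connection string
    # bulk_load(conn, "orders_transformed.csv")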
Still, even using bulk operations, database access is usually the bottleneck in the ETL process. Some common methods used to increase performance are:
- Partition tables (and indices). Try to keep partitions similar in size (watch for null values which can skew the partitioning).
- Do all validation in the ETL layer before the load. Disable integrity checking (disable constraint ...) in the target database tables during the load.
- Disable triggers (disable trigger ...) in the target database tables during the load. Simulate their effect as a separate step.
- Generate IDs in the ETL layer (not in the database).
- Drop the indices (on a table or partition) before the load, and recreate them after the load (SQL: drop index ...; create index ...); see the sketch after this list.
- Use parallel bulk load when possible; this works well when the table is partitioned or there are no indices. Note: attempting parallel loads into the same table (partition) usually causes locks, if not on the data rows then on the indices.
- If a requirement exists to do insertions, updates, or deletions, find out in the ETL layer which rows should be processed in which way, and then process these three operations in the database separately. You can often do a bulk load for inserts, but updates and deletes commonly go through an API (using SQL).
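A minimal sketch of the drop/disable, load, then recreate pattern, assuming a PostgreSQL-style target where the table, index, and loader names are placeholders; the exact DDL for disabling constraints and triggers varies by database.

    # Pre- and post-load maintenance around a bulk load; object names are illustrative.
    PRE_LOAD = [
        "ALTER TABLE sales DISABLE TRIGGER USER",      # skip trigger work during the load
        "DROP INDEX IF EXISTS sales_customer_idx",     # avoid per-row index maintenance
    ]
    POST_LOAD = [
        "CREATE INDEX sales_customer_idx ON sales (customer_id)",   # rebuild the index once
        "ALTER TABLE sales ENABLE TRIGGER USER",       # restore normal trigger behaviour
    ]

    def guarded_load(conn, bulk_load):
        """Run the pre-load DDL, the bulk load itself, then the post-load DDL."""
        with conn.cursor() as cur:
            for statement in PRE_LOAD:
                cur.execute(statement)
        bulk_load(conn)                                # e.g. the COPY-based loader sketched above
        with conn.cursor() as cur:
            for statement in POST_LOAD:
                cur.execute(statement)
        conn.commit()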
Whether to do certain operations in the database or outside may involve a trade-off. For example, removing duplicates using distinct may be slow in the database; thus, it makes sense to do it outside. On the other hand, if using distinct significantly (100x) decreases the number of rows to be extracted, then it makes sense to remove duplicates as early as possible, in the database, before unloading the data.
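As an illustration of removing duplicates outside the database, a streaming pass that keeps the first row seen for each business key avoids a potentially slow DISTINCT on the server; the key columns here are hypothetical.

    def dedupe(rows, key_columns=("customer_id", "order_id")):
        """Yield only the first row seen for each business key; rows are assumed to be dicts."""
        seen = set()
        for row in rows:
            key = tuple(row[col] for col in key_columns)
            if key not in seen:
                seen.add(key)
                yield row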
A common source of problems in ETL is a large number of dependencies among ETL jobs; for example, job "B" cannot start while job "A" is not finished. You can usually achieve better performance by visualizing all processes on a graph, and trying to reduce the graph by making maximum use of parallelism and making the "chains" of consecutive processing as short as possible. Again, partitioning of big tables and of their indices can really help.
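To make that dependency structure explicit, a small scheduler can walk the job graph and start every job whose prerequisites have finished, so independent chains run in parallel. The job names and graph below are placeholders for illustration.

    from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

    # Hypothetical job graph: each job lists the jobs it depends on.
    DEPENDS_ON = {
        "extract_customers": [],
        "extract_orders": [],
        "load_dim_customer": ["extract_customers"],
        "load_fact_orders": ["extract_orders", "load_dim_customer"],
    }

    def run_graph(jobs, run_job):
        """Start each job as soon as its dependencies finish, keeping independent chains parallel."""
        done, running = set(), {}
        with ThreadPoolExecutor() as pool:
            while len(done) < len(jobs):
                for name, deps in jobs.items():
                    if name not in done and name not in running and all(d in done for d in deps):
                        running[name] = pool.submit(run_job, name)
                finished, _ = wait(running.values(), return_when=FIRST_COMPLETED)
                for name, future in list(running.items()):
                    if future in finished:
                        future.result()          # re-raise any job failure
                        done.add(name)
                        del running[name]

    # run_graph(DEPENDS_ON, lambda name: print("running", name))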
Another common issue occurs when the data is spread among several databases, and processing is done in those databases sequentially. Sometimes database replication may be involved as a method of copying data between databases, and this can significantly slow down the whole process. The common solution is to reduce the processing graph to only three layers:
- Sources
- Central ETL layer
- Targets
This allows processing to take maximum advantage of parallel processing. For example, if you need to load data into two databases, you can run the loads in parallel (instead of loading into the first and then replicating into the second).
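A minimal illustration of that fan-out, assuming two independent target connections and a placeholder load_into helper; the target names are made up.

    from concurrent.futures import ThreadPoolExecutor

    def load_into(target_name, rows):
        """Placeholder for the real bulk load against one target database."""
        print(f"loading {len(rows)} rows into {target_name}")

    def load_both_targets(rows):
        """Load the same prepared rows into both targets in parallel, instead of load-then-replicate."""
        with ThreadPoolExecutor(max_workers=2) as pool:
            futures = [pool.submit(load_into, name, rows) for name in ("dw_primary", "dw_reporting")]
            for future in futures:
                future.result()   # surface any load error

    # load_both_targets([{"id": 1}, {"id": 2}])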
Of course, sometimes processing must take place sequentially. For example, you usually need to get dimensional (reference) data before you can get and validate the rows for the main "fact" tables.
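One way to picture that ordering: the dimension load has to complete before fact rows can be validated against it. The sketch below loads a hypothetical customer dimension first and sets aside fact rows whose keys are unknown; the key name and loader callables are assumptions.

    def load_facts_after_dims(dim_rows, fact_rows, load_dim, load_fact):
        """Load the dimension first, then validate fact rows against its keys before loading them."""
        known_keys = set()
        for row in dim_rows:                       # step 1: dimension (reference) data
            load_dim(row)                          # assumed dimension loader
            known_keys.add(row["customer_id"])     # hypothetical business key
        valid, rejected = [], []
        for row in fact_rows:                      # step 2: only then validate the facts
            (valid if row["customer_id"] in known_keys else rejected).append(row)
        load_fact(valid)                           # assumed fact loader
        return rejected                            # rows to report or reprocess later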
Parallel processing
A recent development in ETL software is the implementation of parallel processing. This has enabled a number of methods to improve overall performance of ETL processes when dealing with large volumes of data.
ETL applications implement three main types of parallelism:
- Data: Splitting a single sequential file into smaller data files to provide parallel access (see the sketch after this list).
- Pipeline: Allowing the simultaneous running of several components on the same data stream. For example: looking up a value on record 1 at the same time as adding two fields on record 2.
- Component: The simultaneous running of multiple processes on different data streams in the same job, for example, sorting one input file while removing duplicates on another file.
All three types of parallelism usually operate combined in a single job.
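As a minimal sketch of the data-parallel case, the snippet below splits the input into chunks and transforms them on a process pool; the transformation itself is a trivial placeholder.

    from concurrent.futures import ProcessPoolExecutor

    def transform_chunk(chunk):
        """Placeholder transformation applied to one partition of the data."""
        return [line.upper() for line in chunk]

    def parallel_transform(lines, chunk_size=10_000):
        """Data parallelism: split one input into chunks and transform the chunks concurrently."""
        chunks = [lines[i:i + chunk_size] for i in range(0, len(lines), chunk_size)]
        with ProcessPoolExecutor() as pool:
            results = list(pool.map(transform_chunk, chunks))
        return [row for chunk in results for row in chunk]

    # if __name__ == "__main__":               # guard needed for process pools on some platforms
    #     print(parallel_transform(["a", "b", "c"], chunk_size=2))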
An additional difficulty comes with making sure that the data being uploaded is relatively consistent. Because multiple source databases may have different update cycles (some may be updated every few minutes, while others may take days or weeks), an ETL system may be required to hold back certain data until all sources are synchronized. Likewise, where a warehouse may have to be reconciled to the contents in a source system or with the general ledger, establishing synchronization and reconciliation points becomes necessary.
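One common way to hold data back until the sources line up is to load only rows at or before the slowest source's last successful extract. The source names, timestamps, and column name below are illustrative.

    from datetime import datetime

    # Hypothetical last-successful-extract times reported by each source system.
    last_extracted = {
        "crm": datetime(2024, 1, 15, 6, 0),
        "billing": datetime(2024, 1, 15, 2, 30),
        "inventory": datetime(2024, 1, 14, 23, 45),
    }

    # The warehouse is only consistent up to the slowest source, so use that as the cut-off.
    sync_point = min(last_extracted.values())

    def consistent_rows(rows, timestamp_column="updated_at"):
        """Keep only rows at or before the synchronization point; newer rows wait for the next run."""
        return [row for row in rows if row[timestamp_column] <= sync_point]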
Source: Wikipedia.