Just heard someone at IDUG EMEA 2016 asking how to set up DB2 LUW shadow tables.
This depends on the DB2 version; you also need a compatible CDC (InfoSphere Replication) version, which you can check via Finding System Requirements for IBM Software Products.
The steps are at http://www.smooth1.co.uk/db2/cancun.html#1.1
NOTE: This was written when the feature first came out in DB2 LUW 10.5 FP4.
CDC Components to download were:
- InfoSphere CDC for DB2 for LUW
- InfoSphere CDC Access Server
- InfoSphere CDC Management Console
db2look -m -r
The “-r” option generates db2look output without the RUNSTATS command output. However, the current db2look code cannot reset SYSSTAT.COLUMNS and SYSSTAT.INDEXES when -r is applied. As a result, SQL1227N errors can sometimes occur when applying the mimic output if COLCARD etc. and CARD for the table are inconsistent. For db2look versions up to V10.5, either use db2look -m without the -r option, or manually insert statements resetting SYSSTAT.COLUMNS and SYSSTAT.INDEXES into the db2look output generated with “-m” and “-r”. This limitation will be changed in releases later than V10.5. In most cases it should not cause an SQL1227N error, but the error can still occur when the statistics on the source system are inconsistent. Here are examples:
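A minimal sketch of the two workarounds, assuming a database called SAMPLE and a table MYSCHEMA.MYTAB (both hypothetical names); this requires a live DB2 instance:

```shell
# Option 1: generate the mimic output without -r, so RUNSTATS output is
# included and the statistics stay consistent:
db2look -d SAMPLE -t MYSCHEMA.MYTAB -m -o mimic.sql

# Option 2: keep -m -r, but first reset the column and index statistics
# to "unknown" (-1) so the generated UPDATE SYSSTAT statements cannot
# conflict with stale values:
db2 connect to SAMPLE
db2 "UPDATE SYSSTAT.COLUMNS SET COLCARD = -1, HIGH2KEY = '', LOW2KEY = ''
     WHERE TABSCHEMA = 'MYSCHEMA' AND TABNAME = 'MYTAB'"
db2 "UPDATE SYSSTAT.INDEXES SET FIRSTKEYCARD = -1, FULLKEYCARD = -1
     WHERE TABSCHEMA = 'MYSCHEMA' AND TABNAME = 'MYTAB'"
db2 -tvf mimic.sql
```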
- Why is COLCARD sometimes bigger than CARD after RUNSTATS?
- RUNSTATS may update unexpected HIGH2KEY and LOW2KEY statistic values, which may cause SQL1227N
As documented in the above technotes, you can bypass the error by setting the following undocumented registry variable (even if the statistics are inconsistent) and then recycling the instance.
This registry variable is available from 9.7 FP9 / 10.1 FP4 / 10.5 onwards.
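The general pattern for applying such a registry variable is the same in each case; the actual variable name is documented in the technotes above and is left as a placeholder here:

```shell
# Set the undocumented registry variable named in the technotes
# (placeholder shown), then recycle the instance so it takes effect:
db2set <UNDOCUMENTED_REGISTRY_VARIABLE>=ON
db2stop
db2start
```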
DB2 10.5 Fixpack 6 is out
“Fix Pack 6 does not contain new enhancements, and there were no changes to the existing functionality. This fix pack and future fix packs will deliver only APAR and security vulnerability fixes.”
Database product installation FAQ updated to include Cancun: http://www.smooth1.co.uk/installs/dbinstalls.html.
Also the Cancun New Features document is a WIP: http://www.smooth1.co.uk/db2/cancun.html
One known issue – db2 update db cfg error: http://www-01.ibm.com/support/docview.wss?uid=swg21686677&myns=swgimgmt&mynp=OCSSEPGG&mync=E
Highlights from Chris Eaton’s presentation at http://www.idug-db2.com/?commid=128257
With DB2 BLU, not all active data needs to fit in memory to get the analysis done.
This means a smaller machine with fewer cores can be used, and hence fewer database licenses are needed.
BLU chief architect Sam Lightstone: “Memory is the new disk”.
Pack data into L1/L2/L3 cache to avoid going to main memory if not required.
Data is packed into register-length encodings to avoid register manipulation when fetching from main memory.
Work on data whilst it is compressed in memory; equality, range, joins and aggregation are done on compressed data.
Order of magnitude gains over products which need to decompress to process the data.
BLU will use all cores, with 90% or more core utilization when queries are running, even on 32 cores.
Data skipping tells you which blocks do NOT contain data, using min/max values per chunk of data.
Data skipping is unique in BLU – it even skips main memory access, vs Exadata storage indexes which just avoid I/Os.
Shadow tables – a column-based copy of a row-based table kept in sync by CDC Replication.
Reports can be run ad hoc on the OLTP system to get current results with no application changes.
Analytics are pointed at the row-based table, but the optimizer will use the BLU column-based copy.
No analytical indexes needed on BLU tables.
No great slowdown to OLTP, as it does not need to update analytical indexes.
No replication needed to another reporting system.
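A hedged sketch of creating a shadow table for an existing row-organized table OLTP.ORDERS (database, schema, table and column names here are all hypothetical); the shadow table is maintained by CDC replication, not by the database itself:

```shell
db2 connect to MYDB
# Create the column-organized, replication-maintained copy:
db2 "CREATE TABLE OLTP.ORDERS_SHADOW AS
       (SELECT * FROM OLTP.ORDERS)
     DATA INITIALLY DEFERRED REFRESH DEFERRED
     ENABLE QUERY OPTIMIZATION
     MAINTAINED BY REPLICATION
     ORGANIZE BY COLUMN"
# An informational primary key matching the base table's key is needed
# for the CDC table mapping:
db2 "ALTER TABLE OLTP.ORDERS_SHADOW
     ADD CONSTRAINT PK_ORD PRIMARY KEY (ORDER_ID) NOT ENFORCED"
```

With CDC keeping the copy in sync, the optimizer can then route eligible analytic queries against OLTP.ORDERS to the shadow table with no application changes.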
When running SAP BW 7.4 and DB2 Cancun more objects can be used with BLU.
Oracle only has the columnar version in-memory, hence on startup the columnar version needs to be rebuilt.
Also the DBA needs to decide which tables/partitions/columns are hot and need to be in-memory, vs the DB2 optimizer deciding which items are hot. On Oracle the DBA also needs to decide which level of compression to use, and as Oracle needs to decompress the data to work on it, the overhead goes up as more compression is used.
Chris Eaton said Oracle’s default compression is “memcompress for query low”, which is 2x compression vs BLU’s 10x compression.
With Oracle the DBA chooses which tables/partitions/columns to load, vs DB2 BLU where you create a shadow table on disk with 10x compression.
Oracle in-memory is an add-on on top of Enterprise Edition; you will probably need the partitioning option as well.
BLU vs HANA: tables in HANA are row or column, not both as with shadow tables. The HANA column store is columnar only, with no shadow tables. There is a row-based input buffer: inserts/updates/deletes go into a row-based in-memory L1 differential cache, then a columnar L2 cache, and from there into the main store.
DB2 can store as row or column on disk and can do shadow tables as well.
HANA uses multiple compression techniques – dictionary encoding, run length encoding, cluster encoding. Equality comparisons can be done on compressed data.
HANA requires all active data to fit in memory; an entire column from a single partition must go into memory, otherwise swapping occurs.
HANA is Linux only, no AIX; at Insight there will be more announcements about DB2 BLU platforms.
HANA reads go to the column store and also have to include the differential store.
For HANA, differential store merges require a checkpoint, hence extra merging/checkpointing and more memory/CPU needed, vs BLU going directly into the columnar store.
->L1 delta (row) -> log + merge-> L2 delta (column) -> merge into main store
DB2 pureScale supports rolling updates; Oracle patchsets are not rolling-upgradable and do not support standby-first – you need a PSU/CPU for rolling/standby-first patching.
pureScale with sockets is supported with VMware/KVM: VMware on any supported OS, KVM on Red Hat 6.2 or higher.
DB2 supports reporting bugs hit under VMware; Oracle requires you to reproduce the bug on the native OS rather than VMware to get support. DB2 supports sub-capacity licensing – only the cores you use.
The next Tech Talk is “What’s new in DB2 10.5 Cancun” from Sam Lightstone and Christian Garcia-Arellano on 20 November.
IBM recommendation for BLU: set DB2_WORKLOAD=ANALYTICS and use AUTOCONFIGURE with 80% of machine memory for a dedicated server – half of that memory for the bufferpool, half or more for sorting depending on the workload. Sort memory is not dynamic. With shadow tables the server is still OLTP, so do not set DB2_WORKLOAD=ANALYTICS; instead use a separate bufferpool for the columnar shadow tables.
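A sketch of that setup for a dedicated BLU analytics server (database name BLUDB is hypothetical; this is for the dedicated-analytics case, not a shadow-table OLTP box):

```shell
# Registry variable is instance-wide; recycle the instance to apply it:
db2set DB2_WORKLOAD=ANALYTICS
db2stop
db2start
# Databases created under DB2_WORKLOAD=ANALYTICS default to
# column-organized tables and analytics-friendly settings:
db2 create db BLUDB
db2 connect to BLUDB
# Let autoconfigure size memory from 80% of the machine for this server:
db2 "AUTOCONFIGURE USING MEM_PERCENT 80 APPLY DB AND DBM"
```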
Column-organized tables do not need reorg except to free unused extents.
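Freeing those unused extents is a one-line operation; the table name here is hypothetical:

```shell
# Reclaim empty extents from a column-organized table and return the
# space to the tablespace; no classic offline reorg is needed:
db2 "REORG TABLE MYSCHEMA.SALES_COL RECLAIM EXTENTS"
```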
Shadow tables – see the Knowledge Centre for ALTER/DROP behaviour. If the base table gets dropped, the shadow table is dropped.
BLU/shadow tables are available in single-partition instances only.
CDC scrapes the log buffer and replicates to the shadow tables, with 4-7% overhead – equal to or less than updating analytical indexes. The DBA needs to create the subscription and target mapping, plus replication monitoring.
OLTP quick queries get higher priority over BLU queries with Workload Management; BLU automatically turns on Workload Management.
When transporting schemas via a restore, if the instance owner does not have permission to bind then no objects are transferred.
Since DB2 V9.7, the user who restores the database is not by default granted SYSADM and DBADM authority after the database is restored. If that user happens to be different from the instance user on the source database, this will cause an issue like this.
The solution is db2set DB2_RESTORE_GRANT_ADMIN_AUTHORITIES=YES and then running the restore again.
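A sketch of the fix, assuming a database MYDB restored from /backups (database name, path and timestamp are hypothetical):

```shell
# Grant the restoring user admin authorities on restored databases;
# recycle the instance so the registry setting takes effect:
db2set DB2_RESTORE_GRANT_ADMIN_AUTHORITIES=YES
db2stop
db2start
# Re-run the restore that previously left the user without authority:
db2 "RESTORE DATABASE MYDB FROM /backups TAKEN AT 20141101120000"
```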
INSERT could be slowed down by latch contention in “Adding Index Key” operations