DB2 Tech Talk, October 16, 2014: DB2 Competitive Update

Highlights from Chris Eaton’s presentation at http://www.idug-db2.com/?commid=128257

With DB2 BLU, not all active data needs to fit in memory for the analysis to get done.

This means a smaller machine with fewer cores can be used, and hence fewer database licenses are needed.

BLU chief architect Sam Lightstone: “Memory is the new disk”.

Data is packed into L1/L2/L3 cache to avoid going to main memory where not required.

Data is packed into register-length encodings, avoiding register manipulation when fetching from main memory.

BLU works on data whilst it is compressed in memory: equality, range, join and aggregation operations are all done on compressed data.

This gives order-of-magnitude gains over products which need to decompress the data to process it.
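For context, column-organized (BLU) tables are created with ORGANIZE BY COLUMN; compression is applied automatically, and the operations above run against the encoded values. A minimal sketch, with hypothetical table and column names:

    -- Hypothetical column-organized (BLU) table; compression is
    -- automatic, no COMPRESS clause is needed.
    CREATE TABLE sales (
        sale_date DATE          NOT NULL,
        store_id  INTEGER       NOT NULL,
        amount    DECIMAL(12,2)
    ) ORGANIZE BY COLUMN;

    -- Equality, range and aggregation predicates like these are
    -- evaluated on the compressed, encoded column data.
    SELECT store_id, SUM(amount)
    FROM sales
    WHERE sale_date BETWEEN '2014-01-01' AND '2014-06-30'
    GROUP BY store_id;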

BLU will use all cores, reaching 90% or higher core utilization while queries are running, even on 32 cores.

Data skipping tells you which blocks do NOT contain qualifying data, using min/max values kept per chunk of data.

Data skipping is unique in BLU – it even skips main memory access, whereas Exadata storage indexes only avoid I/Os.
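BLU implements data skipping through automatically created synopsis tables holding those min/max values. As an illustration (this query is my assumption, not from the talk), the synopsis tables live in the SYSIBM schema with names starting with SYN:

    -- List the synopsis tables BLU maintains for column-organized tables.
    SELECT tabname
    FROM syscat.tables
    WHERE tabschema = 'SYSIBM'
      AND tabname LIKE 'SYN%';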

Shadow tables – a column-based copy of a row-based table, kept in sync by CDC replication.

Ad hoc reports can be run on the OLTP system to get current results with no application changes.

Analytics are pointed at the row-based table, but the optimizer will use the BLU column-based copy.

No analytical indexes are needed on BLU tables.

No great slowdown to OLTP, as there are no analytical indexes to update.

No replication needed to another reporting system.
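For reference, a shadow table is a column-organized MQT maintained by replication. A hedged sketch of the DDL, with a hypothetical base table app.orders:

    -- app.orders is an assumed row-organized OLTP table.
    -- The shadow table is a column-organized copy maintained by CDC.
    CREATE TABLE app.orders_shadow AS
        (SELECT order_id, customer_id, order_date, total
         FROM app.orders)
        DATA INITIALLY DEFERRED
        REFRESH DEFERRED
        ENABLE QUERY OPTIMIZATION
        MAINTAINED BY REPLICATION
        ORGANIZE BY COLUMN;
    -- A primary key matching the base table's key must also be
    -- declared on the shadow copy (not shown here).

Once CDC is keeping the copy in sync, the optimizer can route eligible analytic queries against app.orders to the shadow copy automatically.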

When running SAP BW 7.4 on DB2 Cancun, more object types can be used with BLU.

Oracle only has the columnar version in memory, hence on startup the columnar version needs to be rebuilt.

Also, the Oracle DBA needs to decide which tables/partitions/columns are hot and need to be in memory, vs the DB2 optimizer deciding which items are hot. On Oracle the DBA also needs to decide which level of compression to use, and as Oracle needs to decompress the data to work on it, the overhead goes up as more compression is used.

Chris Eaton said Oracle’s default compression is “memcompress for query low”, which gives roughly 2x compression, vs 10x compression with BLU.

With Oracle the DBA chooses which tables/partitions/columns go in memory, vs DB2 BLU creating a shadow table on disk with 10x compression.
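One way to see the compression a BLU table actually achieves (my assumption, not from the talk) is the PCTPAGESSAVED figure in the catalog, populated after RUNSTATS:

    -- Rough compression check for column-organized tables
    -- (TABLEORG = 'C' marks column-organized).
    SELECT tabname, pctpagessaved
    FROM syscat.tables
    WHERE tableorg = 'C';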

Oracle In-Memory is an add-on on top of Enterprise Edition, and you will probably need the Partitioning option as well.

BLU vs HANA: tables in HANA are row- or column-organized, not both at once as with shadow tables. The HANA column store has a row-based input buffer: inserts/updates/deletes go into a row-based in-memory L1 differential cache, then a columnar L2 cache, and from there into the main store.
DB2 can store tables as row- or column-organized on disk and can do shadow tables as well.

HANA uses multiple compression techniques – dictionary encoding, run-length encoding, cluster encoding. Equality comparisons can be done on compressed data.

HANA requires all active data to fit in memory: an entire column from a single partition must fit into memory at once, otherwise swapping occurs.

HANA is Linux only, no AIX; at Insight there will be more announcements about DB2 BLU platforms.

HANA reads go to the column store and must also include the differential store.

For HANA, differential-store merges require a checkpoint, hence extra merging/checkpointing (and more memory/CPU) vs BLU writing directly into the columnar store.

writes -> L1 delta (row) -> log + merge -> L2 delta (column) -> merge into main store

SAP Sizing for HANA http://scn.sap.com/community/hana-in-memory/blog/2013/05/27/andy-silvey–sap-hana-command-line-tools-and-sql-reference-examples-for-netweaver-basis-administrators

DB2 pureScale supports rolling updates; Oracle patchsets are not rolling-upgradable and do not support standby-first patching. You need PSUs/CPUs for rolling/standby-first installs.
https://oracleus.activeevents.com/2013/connect/fileDownload/session/EB07737AFDDC725F997443DF2E19F305/CON8413_Meritt-OOW%20Presentation%20September%202013%20v4.pptx

pureScale with sockets is supported with VMware/KVM – VMware on any supported OS, KVM on Red Hat 6.2 or higher.
DB2 supports reporting bugs under VMware, whereas Oracle requires you to reproduce the bug on the native OS rather than VMware to get support. DB2 supports sub-capacity licensing: you license only the cores you use.

The next Tech Talk is “What’s new in DB2 10.5 Cancun” from Sam Lightstone and Christian Garcia-Arellano on 20 November.

IBM’s recommendation for BLU: set DB2_WORKLOAD=ANALYTICS and run AUTOCONFIGURE with 80% of machine memory on a dedicated server. Give roughly half of that memory to the bufferpool and half or more to sorting, depending on the workload; sort memory is not dynamic. With shadow tables the server is still an OLTP server, so do not set DB2_WORKLOAD=ANALYTICS; instead use a separate bufferpool for the columnar shadow tables.
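As a sketch of those steps for a dedicated analytics server (the 80% figure is from the talk; the database name is hypothetical):

    -- Dedicated BLU analytics server, not a shadow-table OLTP server:
    db2set DB2_WORKLOAD=ANALYTICS
    db2 CONNECT TO mydb
    db2 AUTOCONFIGURE USING mem_percent 80 APPLY DB AND DBM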

Column-organized tables do not need reorg except to free unused extents.
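Freeing those unused extents is done with the RECLAIM EXTENTS option (table name hypothetical):

    -- Reclaim extents freed by deletes in a column-organized table.
    db2 "REORG TABLE app.orders_shadow RECLAIM EXTENTS"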

Shadow tables – see the Knowledge Center for ALTER/DROP behaviour. If the base table gets dropped, the shadow table is dropped too.

BLU/shadow tables are available in single-partition instances only.

CDC scrapes the log buffer and replicates to shadow tables with 4-7% overhead – equal to or less than the cost of updating analytical indexes. The DBA needs to create the subscription and target mapping, plus replication monitoring.

Quick OLTP queries get higher priority than BLU queries via Workload Management; BLU automatically turns on Workload Management.
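To see the service classes WLM is using, one can query the catalog (an illustrative query, not from the talk):

    -- WLM service classes; with BLU, heavy analytic queries run under
    -- a default managed subclass with concurrency control.
    SELECT serviceclassname, parentserviceclassname, enabled
    FROM syscat.serviceclasses;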
