How do you enable a JDBC trace for a Java Stored Procedure or Java UDF?

http://www-01.ibm.com/support/docview.wss?uid=swg21884054&myns=swgimgmt&mynp=OCSSEPGG&mync=E&cm_sp=swgimgmt-_-OCSSEPGG-_-E


Issue the following command on the database server:

db2set DB2_JVM_STARTARGS="-Ddb2.jcc.override.traceLevel=-1 -Ddb2.jcc.override.traceFile=/tmp/jdbc_trace -Ddb2.jcc.override.traceFileAppend=true"

… where /tmp/jdbc_trace can be any path/file name that is writable by the DB2 fenced user ID.

If the database manager configuration parameter KEEPFENCED is set to YES, this command requires the DB2 instance to be recycled:

db2stop
db2start
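
To confirm the registry variable took effect, display it with db2set (passing just the variable name prints its current value):

db2set DB2_JVM_STARTARGS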

To remove this parameter issue the command:

db2set DB2_JVM_STARTARGS=

… and then restart the DB2 instance (assuming KEEPFENCED=YES is used).


Binding DB2 Base Packages for Various Versions

http://db2commerce.com/2015/04/28/binding-db2-base-packages-for-various-versions/



DB2 10.5 Fixpack 4 Cancun New Features

Database product installation FAQ updated to include Cancun: http://www.smooth1.co.uk/installs/dbinstalls.html

Also a work-in-progress Cancun New Features document: http://www.smooth1.co.uk/db2/cancun.html


DB2 Tech Talk, October 16 2014: DB2 Competitive Update

Highlights from Chris Eaton’s presentation at http://www.idug-db2.com/?commid=128257

With DB2 BLU, not all active data needs to fit in memory to get the analysis done.

This means a smaller machine with fewer cores can be used, and hence fewer database licenses are needed.

BLU chief architect Sam Lightstone: “Memory is the new disk”.

Data is packed into L1/L2/L3 cache to avoid going to main memory when not required.

Data is packed into register-length encodings to avoid register manipulations when fetching from main memory.

Work happens on data whilst it is compressed in memory; equality, range, join and aggregation operations are done on compressed data.

Order of magnitude gains over products which need to decompress to process the data.

BLU will use all cores, achieving 90% or more core utilization when queries are running, even with 32 cores.

Data skipping tells you which blocks do NOT contain the data, using min/max values kept per chunk of data.

Data skipping is unique in BLU – it even skips main memory access, whereas Exadata storage indexes just avoid I/Os.
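
The min/max values live in synopsis tables that BLU maintains automatically; assuming the default naming (synopsis tables sit in the SYSIBM schema with names starting SYN), a read-only catalog query can list them:

db2 "SELECT TABSCHEMA, TABNAME FROM SYSCAT.TABLES WHERE TABSCHEMA = 'SYSIBM' AND TABNAME LIKE 'SYN%'"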

Shadow tables – a column-based copy of a row-based table, kept in sync by CDC replication (see the DDL sketch a few notes below).

Reports can be run ad hoc on the OLTP system to get current results with no application changes.

Analytics are pointed at the row-based table but the optimizer will use the BLU column-based copy.

No analytical indexes needed on BLU tables.

No great slowdown to OLTP, as it does not need to update analytical indexes.

No replication needed to another reporting system.
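
As a rough sketch, the DDL behind a shadow table is a column-organized, replication-maintained MQT; the schema/table names below are made up for illustration, and the Knowledge Center should be checked for the full prerequisites (e.g. an enforced primary key on the base table). Run via db2 -tvf:

-- MYSCH.ORDERS is a hypothetical row-organized base table
CREATE TABLE MYSCH.ORDERS_SHADOW AS
  (SELECT * FROM MYSCH.ORDERS)
  DATA INITIALLY DEFERRED REFRESH DEFERRED
  ENABLE QUERY OPTIMIZATION
  MAINTAINED BY REPLICATION
  ORGANIZE BY COLUMN;

-- take the shadow table out of set integrity pending state; CDC keeps it in sync from here
SET INTEGRITY FOR MYSCH.ORDERS_SHADOW ALL IMMEDIATE UNCHECKED;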

When running SAP BW 7.4 and DB2 Cancun more objects can be used with BLU.

Oracle only has the columnar version in-memory, hence on startup the columnar version needs to be rebuilt.

Also the DBA needs to decide which tables/partitions/columns are hot and need to be in-memory, vs the DB2 optimizer deciding which items are hot. With Oracle the DBA needs to decide which level of compression to use, and as Oracle needs to decompress the data to work on it, the overhead goes up as more compression is used.

Chris Eaton said the Oracle default compression is “memcompress for query low”, which is 2x compression vs BLU’s 10x compression.

With Oracle you choose which tables/partitions/columns go in-memory, vs DB2 BLU creating a shadow table on disk with 10x compression.

Oracle in-memory is an add-on on top of Enterprise Edition, and will probably need the partitioning option as well.

BLU vs HANA: tables in HANA are row- or column-organized, not both as with shadow tables; the HANA column store is columnar only, with no shadow tables. There is a row-based input buffer: inserts/updates/deletes go into a row-based in-memory L1 differential cache, then a columnar L2 cache, and from there into the main store.
DB2 can store a table as row or column organized on disk and can do shadow tables as well.

HANA uses multiple compression techniques – dictionary encoding, run length encoding, cluster encoding. Equality comparisons can be done on compressed data.

HANA requires all active data to fit in memory; an entire column from a single partition must fit in memory, otherwise swapping occurs.

HANA is Linux only, no AIX; at Insight there will be more announcements about DB2 BLU platforms.

HANA reads go to the column store and also have to include the differential store.

For HANA, differential store merges require a checkpoint, hence extra merging/checkpointing vs BLU going directly into the columnar store, so more memory/CPU is needed.

writes -> L1 delta (row) -> log + merge -> L2 delta (column) -> merge into main store

SAP Sizing for HANA http://scn.sap.com/community/hana-in-memory/blog/2013/05/27/andy-silvey–sap-hana-command-line-tools-and-sql-reference-examples-for-netweaver-basis-administrators

DB2 pureScale supports rolling updates; Oracle patch sets are not rolling-upgradable and do not support standby-first patching – you need a PSU/CPU for rolling/standby-first.
https://oracleus.activeevents.com/2013/connect/fileDownload/session/EB07737AFDDC725F997443DF2E19F305/CON8413_Meritt-OOW%20Presentation%20September%202013%20v4.pptx

pureScale with sockets is supported with VMware/KVM – VMware at any level, KVM at Red Hat 6.2 or higher.
DB2 supports reporting bugs seen under VMware, whereas Oracle requires you to reproduce the bug on the native OS rather than VMware to get support. DB2 supports sub-capacity licensing – you only license the cores you use.

The next Tech Talk is “What’s new in DB2 10.5 Cancun” from Sam Lightstone and Christian Garcia-Arellano, 20 November.

IBM recommendation for BLU: set DB2_WORKLOAD=ANALYTICS and use AUTOCONFIGURE with 80% of machine memory for a dedicated server. Roughly half of that memory goes to the bufferpool and half or more to sorting, depending on the workload; sort memory is not dynamic. With shadow tables the server is still an OLTP system, so do not set DB2_WORKLOAD=ANALYTICS, but do use a separate bufferpool for the columnar shadow tables.
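
A minimal sketch of that setup for a dedicated analytics server (SAMPLE is a placeholder database name; DB2_WORKLOAD=ANALYTICS is set before creating the database so the analytics defaults apply):

db2set DB2_WORKLOAD=ANALYTICS
db2 "CREATE DATABASE SAMPLE"
db2 "CONNECT TO SAMPLE"
db2 "AUTOCONFIGURE USING MEM_PERCENT 80 WORKLOAD_TYPE COMPLEX APPLY DB AND DBM"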

Column-organized tables do not need reorg except to free unused extents.
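
For the extent-reclaim case, a sketch of the command (MYTABLE is a placeholder table name):

db2 "REORG TABLE MYTABLE RECLAIM EXTENTS"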

Shadow tables – see the Knowledge Center for ALTER/DROP behaviour. If the base table gets dropped, the shadow table is dropped.

BLU/shadow tables are supported in single-partition instances only.

CDC scrapes the log buffer and replicates to shadow tables with 4-7% overhead – equal to or less than updating analytical indexes. The DBA needs to create the subscription and target mapping, plus replication monitoring.

Quick OLTP queries get higher priority than BLU queries with Workload Management; BLU automatically turns on Workload Management.
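
As a read-only check of what that gives you, the WLM service classes can be listed from the catalog:

db2 "SELECT VARCHAR(SERVICECLASSNAME, 40) AS SERVICE_CLASS FROM SYSCAT.SERVICECLASSES"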


DB2 transport schemas with DB2_RESTORE_GRANT_ADMIN_AUTHORITIES

When transporting schemas via a restore, if the instance owner does not have permission to bind then no objects are transferred.

Since DB2 V9.7, the user who restores the database is by default not granted SYSADM and DBADM authority after the database is restored. If that user happens to be different from the instance user in the source database, it will cause an issue like this one.

The solution is to issue db2set DB2_RESTORE_GRANT_ADMIN_AUTHORITIES=YES and then run the restore again.
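
A sketch of the whole sequence (database, tablespace, schema and path names below are placeholders; the instance is recycled because registry variables generally only take effect after a restart):

db2set DB2_RESTORE_GRANT_ADMIN_AUTHORITIES=YES
db2stop
db2start
db2 "RESTORE DATABASE SRCDB TABLESPACE (TS1) SCHEMA (MYSCHEMA) FROM /backups TRANSPORT INTO TGTDB"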

http://www-01.ibm.com/support/docview.wss?uid=swg21685941&myns=swgimgmt&mynp=OCSSEPGG&mync=E


DB2 recreating autostorage files

http://www-01.ibm.com/support/docview.wss?uid=swg21682091&myns=swgimgmt&mynp=OCSSEPGG&mync=E


DB2 Insert Performance – tracking Page Splits

INSERT could be slowed down by latch contention in “Adding Index Key” operations

http://www-01.ibm.com/support/docview.wss?uid=swg21681439&myns=swgimgmt&mynp=OCSSEPGG&mync=E