SQL Relay London 30 October 2014

My slides from today are at http://www.smooth1.co.uk/sqlserver2014/2014NewFeatures.zip

Notes from Christian Bolton's talk Understanding Parallelism

Troubleshoot at task level rather than session level.

session->task->worker thread->scheduler->Logical CPU

Query cost values are the same across all SQL Server editions.

The decision to use parallelism is made at runtime, not at optimization time, e.g. if there are not enough worker threads available.

To get the average query cost, look at the XML plan from the plan cache.
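
A minimal sketch of pulling that cost out of the plan cache (the XPath and TOP value are my illustration, not from the talk):

-- StatementSubTreeCost is the optimizer's estimated cost for the whole statement
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP (20)
       qp.query_plan.value('(//StmtSimple/@StatementSubTreeCost)[1]', 'float') AS est_subtree_cost,
       qs.execution_count
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY est_subtree_cost DESC;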

Gather streams maintains sort order

Other parallelism operators are Repartition (Many to Many) and Distribute (One to Many)

Each parallel task is really a producer and a consumer, as seen in sys.dm_os_waiting_tasks.

At startup the -P option can adjust the number of schedulers, letting you see how the optimizer will behave even on hardware which does not have that number of CPUs; check with sys.dm_os_schedulers.
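
For example, a quick check of what the engine can see:

SELECT scheduler_id, status, current_tasks_count, runnable_tasks_count
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE';  -- schedulers available for user queries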

Bitmap operators only appear in a parallel plan and can be used as a predicate to filter against another query plan leg when joining. This means estimated rows will NOT match actual rows, as the filtering is done at runtime and hence is not in the estimate.

Does the query processor wait for the full bitmap to be built?
With parallelism 8 do you get 8 bitmaps?

Do not set max degree of parallelism to 1 at the server level without thinking about the impact – it affects DBCC CHECKDB and index rebuilds.

Check sys.dm_os_sys_info. Each worker thread takes up a 2MB stack outside the SQL Server memory.
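
A quick look (column names as I recall them, worth verifying):

SELECT cpu_count, scheduler_count, max_workers_count
FROM sys.dm_os_sys_info;  -- max_workers_count * 2MB = potential stack memory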

A CXPACKET wait just means one of the parallel tasks has finished and is waiting for the others to complete – not necessarily bad; it often just means lots of long-running parallel queries.
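
To see the cumulative CXPACKET wait on a server (an illustrative check, not from the talk):

SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'CXPACKET';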

You can adjust the cost threshold for parallelism (the default is 5); increase it to reduce the number of parallel plan executions if these are flooding the server.
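
For example (25 is just an illustrative value; test for your workload):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 25;
RECONFIGURE;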

Claire Mora – T-SQL Performance Issues : What you should know to find and resolve them

Consider SQL Cruise, maybe next year!

Use a Row Generator to create sample test data
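
One common approach is a zero-I/O tally built from stacked CROSS JOINs – a minimal sketch, not the presenter's exact demo:

-- 10 rows cross joined 4 times = 10,000 rows, without touching any table
WITH E1(n) AS (SELECT 1 FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) AS t(n)),
     E4(n) AS (SELECT 1 FROM E1 a CROSS JOIN E1 b CROSS JOIN E1 c CROSS JOIN E1 d),
     Tally(n) AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E4)
SELECT n FROM Tally;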

Avoid parallel plans with a sort.

For complex queries consider Divide and Conquer http://www.sqlservercentral.com/articles/T-SQL/107611/

Data bubbles – thick lines in the SSMS query plan.

Quirky Update – http://www.sqlservercentral.com/articles/T-SQL/68467/

Test against all versions and editions planned for deployment.


SQL Server 2014 CU4 is out – new trace flag.

SQL Server 2014 CU4 is out – http://support.microsoft.com/kb/2999197

http://phoebix.com/2014/10/21/cumulative-updates-sql-server-2014-rtm-cu4/ also mentions a new trace flag

For clustered columnstore indexes:

“This fix introduces a new trace flag 9348 that helps in deciding whether or not a bulk insert is required. If the estimated number of rows is greater than 102400 then a bulk insert is used. If less than 102400 then SQL Server does not use a bulk insert.”
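
If you want to experiment with the flag, the usual trace flag syntax applies (test on a non-production server first):

DBCC TRACEON (9348, -1);   -- -1 = enable globally
DBCC TRACESTATUS (9348);   -- confirm it is active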


SQL in the City London 24 October 2014

Slides for my lightning talk are at http://www.smooth1.co.uk/sqlserver2014/SITC14_Slides.pptx

Items I learnt from SQL Server tips and tricks – Ike Ellis

Top right of most MS products is a quick launch search box, similar to the Windows 8 menu search.

Redgate SQL Prompt – in SSMS press Ctrl+K, Ctrl+Y to prettify SQL code

SQL Server Central has a code prettifier http://extras.sqlservercentral.com/prettifier/prettifier.aspx

WordPress has a code Formatter http://en.support.wordpress.com/code/posting-source-code/

-- the same query with explicit ANSI join syntax and consistent layout
SELECT j.job_location, c.client_name
FROM job AS j
INNER JOIN client AS c
        ON j.client_id = c.client_id
WHERE j.job_id = 1;

DB2 10.5 Fixpack 4 known issues.

One known issue – db2 update db cfg error: http://www-01.ibm.com/support/docview.wss?uid=swg21686677&myns=swgimgmt&mynp=OCSSEPGG&mync=E


DB2 Tech Talk October 16 2014 DB2 Competitive Update

Highlights from Chris Eaton’s presentation at http://www.idug-db2.com/?commid=128257

With DB2 BLU, not all active data needs to fit in memory to get the analysis done.

This means a smaller machine with fewer cores can be used, and hence fewer database licenses are needed.

BLU chief architect Sam Lightstone: “Memory is the new disk”.

Pack data into L1/L2/L3 cache to avoid going to main memory if not required.

Data is packed into register-length encodings to avoid register manipulation when fetching from main memory.

Work is done on data whilst it is compressed in memory; equality, range, join and aggregation operations are done on compressed data.

This gives order-of-magnitude gains over products which need to decompress the data to process it.

BLU will use all cores, with 90% or more core utilization when queries are running, even with 32 cores.

Data skipping tells you which blocks do NOT contain matching data, using min/max values per chunk of data.

Data skipping is unique in BLU – it even skips main memory access, vs Exadata storage indexes which just avoid I/Os.

Shadow tables – column based copy of row based tables kept in sync by CDC Replication.

Reports can be run ad hoc on the OLTP system to get current results with no application changes.

Analytics are pointed at the row-based table but the optimizer will use the BLU column-based copy.

No analytical indexes needed on BLU tables.

No great slowdown to OLTP as it does not need to update analytical indexes.

No replication needed to another reporting system.

When running SAP BW 7.4 and DB2 Cancun more objects can be used with BLU.

Oracle only has the columnar version in-memory, hence on startup the columnar version needs to be rebuilt.
Also the DBA needs to decide which tables/partitions/columns are hot and need to be in-memory, vs the DB2 optimizer deciding which items are hot. On Oracle the DBA also needs to decide which level of compression to use, and as Oracle needs to decompress the data to work on it, the more compression is used the higher the overhead.
Chris Eaton said the Oracle default compression is “memcompress for query low”, which is 2x compression vs BLU's 10x compression.

With Oracle the DBA chooses which tables/partitions/columns to load, vs DB2 BLU creating a shadow table on disk with 10x compression.

Oracle In-Memory is an add-on on top of Enterprise Edition, and you will probably need the partitioning option as well.

BLU vs HANA: tables in HANA are row or column, not both like shadow tables – the HANA column store is columnar only, with no shadow tables. There is a row-based input path: inserts/updates/deletes go into a row-based in-memory L1 differential cache, then a columnar L2 cache, then from there to the main store.
DB2 can store as row or column on disk and can do shadow tables as well.

HANA uses multiple compression techniques – dictionary encoding, run length encoding, cluster encoding. Equality comparisons can be done on compressed data.

HANA requires all active data to fit in memory; an entire column from a single partition must go into memory, otherwise swapping occurs.

HANA is Linux only, no AIX; at Insight there will be more announcements about DB2 BLU platforms.

HANA reads go to the column store plus have to include the differential store.

For HANA, differential store merges require a checkpoint, hence extra merging/checkpointing (and more memory/CPU) vs BLU writing directly into the columnar store.

L1 delta (row) -> log + merge -> L2 delta (column) -> merge into main store

SAP Sizing for HANA http://scn.sap.com/community/hana-in-memory/blog/2013/05/27/andy-silvey–sap-hana-command-line-tools-and-sql-reference-examples-for-netweaver-basis-administrators

DB2 pureScale supports rolling updates; Oracle patch sets are not rolling-upgradable and do not support standby-first. You need a PSU/CPU for rolling/standby-first patching.
https://oracleus.activeevents.com/2013/connect/fileDownload/session/EB07737AFDDC725F997443DF2E19F305/CON8413_Meritt-OOW%20Presentation%20September%202013%20v4.pptx

PureScale with sockets is supported with VMware/KVM – VMware on any supported OS, KVM on Red Hat 6.2 or higher.
DB2 supports reporting bugs under VMware; Oracle requires you to reproduce the bug on the native OS rather than VMware to get support. DB2 supports sub-capacity licensing, i.e. only the cores you use.

The next Tech Talk is “What's new in DB2 10.5 Cancun” from Sam Lightstone and Christian Garcia-Arellano on 20 November.

IBM recommendation for BLU: set DB2_WORKLOAD=ANALYTICS and use autoconfigure with 80% of machine memory for a dedicated server – half of that for the bufferpool and half or more for sorting, depending on workload; sort memory is not dynamic. With shadow tables the server is still OLTP, so do not set DB2_WORKLOAD=ANALYTICS, but do use a separate bufferpool for the columnar shadow tables.
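
For the shadow-table case, a hypothetical sketch of that separate bufferpool (object names and sizes are mine, not IBM's):

-- dedicated bufferpool + tablespace for the columnar shadow tables
CREATE BUFFERPOOL shadow_bp SIZE AUTOMATIC PAGESIZE 32K;
CREATE TABLESPACE shadow_ts PAGESIZE 32K BUFFERPOOL shadow_bp;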

Column-organized tables do not need a reorg except to free unused extents.
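
Freeing extents can be done through ADMIN_CMD (the table name is illustrative):

-- reclaim unused extents from a column-organized table
CALL SYSPROC.ADMIN_CMD('REORG TABLE sales RECLAIM EXTENTS');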

Shadow tables – see the Knowledge Center for ALTER/DROP behaviour. If the base table gets dropped, the shadow table is dropped.

BLU/shadow tables are available in single-partition instances only.

CDC scrapes the log buffer and replicates to the shadow tables, with 4-7% overhead – equal to or less than updating analytical indexes. The DBA needs to create the subscription and target mapping, plus replication monitoring.

Quick OLTP queries get higher priority over BLU queries with Workload Management; BLU automatically turns on Workload Management.


Informix Documentation Blog!

I just found the Informix Documentation Blog – https://www.ibm.com/developerworks/community/blogs/idsdoc/?lang=en

This included:

– new spatial ’round earth’ model in 12.10.xC3
– timeseries data which is recorded at a regular subsecond frequency can use a ‘hertz’ timeseries which saves 10 bytes per record in 12.10.xC3
– numeric timeseries data recorded at a regular frequency can be compressed in 12.10.xC3


October 16 OTN Developer Day – 12c InMemory Database Highlights

Mike Appleyard presentation.

It can be used to change from batch processing to more real-time processing.

In-memory is supported with multitenant.

12c option goals

– 100x Faster Queries – Real-time Analytics

– Faster mixed-workload OLTP

– Transparent – no application changes

Transactions run faster on row format, analytics run faster on column format.

Both row and column formats exist at the same time and are transactionally consistent.

2x to 20x compression for in-memory

The in-memory columnar format can be enabled at the table or partition level, and per column by excluding columns.

Loading into the in-memory area is first-fit; once full, there is no aging/LRU algorithm.

In-memory enabled tables/partitions have priorities – CRITICAL is loaded before being accessed.
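
A hedged 12c example combining the two points above (table and column names are mine):

-- populate at startup (CRITICAL priority) and keep one column out of the column store
ALTER TABLE sales INMEMORY PRIORITY CRITICAL NO INMEMORY (notes);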

Complex OLTP is slowed by analytic indexes – you can potentially remove indexes used only for analytical queries (depends on your workload), meaning less I/O and storage needed.

In-memory is supported with RAC – 2 modes: either duplicate the data on each node (engineered systems only) or partition it across nodes with multi-instance parallel query.

In-memory is supported with Data Guard but not Active Data Guard.

Flashback query/flashback transaction?

Instance Parameters

– inmemory_query enable/disable

– inmemory_clause_default

– inmemory_force
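
For example (inmemory_size is my addition here – the column store needs space reserved before inmemory_query does anything):

ALTER SYSTEM SET inmemory_size = 4G SCOPE=SPFILE;  -- needs a restart
ALTER SYSTEM SET inmemory_query = ENABLE;          -- allow in-memory query plans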

Use v$im_segments to monitor in-memory.
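
e.g. (columns as I recall them):

SELECT segment_name, populate_status, inmemory_size, bytes_not_populated
FROM v$im_segments;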

In-memory is a licensable option.

There are 5 types of compression for in-memory, which can be configured with ALTER TABLE at a per-partition level.
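
A hedged per-partition example (partition name and compression level are illustrative):

ALTER TABLE sales MODIFY PARTITION sales_2014
  INMEMORY MEMCOMPRESS FOR CAPACITY HIGH;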

The current release of in-memory does query/bloom filtering, but aggregation is done in the PGA.

In-memory storage indexes – built on the fly for in-memory segments; they contain min/max values per compression unit.

These are not the same as storage indexes for Exadata, which are per 1MB chunk.

An optimizer hint can be used to turn in-memory on/off per query.
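
The hints are INMEMORY and NO_INMEMORY, e.g. (table alias illustrative):

SELECT /*+ NO_INMEMORY(s) */ COUNT(*)
FROM sales s;   -- force the row-store path for this query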

Bloom filters – filter on multiple predicates in one pass, rather than a hash join on 1 predicate and rescanning the same data 4 times to apply each predicate.

RAC leaf nodes – no ORACLE_HOME or voting disk, just compute nodes (thin install), with no LMS/LMON locking. Read-only but transactionally consistent in 11gR2.


October 13 Microsoft SQL Relay Reading.

Interesting talks at Microsoft SQL Relay Reading, the first SQL Relay event this year.

The Introduction to OLTP In-Memory from Amanda Ford and another speaker from Microsoft covered some items I had not seen before:

1. The in-memory OLTP filegroup preallocates more pages than you might think when you first create the filegroup – this will increase the size of your backup. A database with just the in-memory filegroup and no tables will result in a 5GB backup!

2. The in-memory filegroup is made up of pairs of files called checkpoint pairs – a datafile and a delta file.
The datafile contains inserts and the delta file deleted rows.

3. You want to keep 30% of your buckets free.

4. In-memory OLTP uses Multi Version Concurrency Control (MVCC); rows have a start and end timestamp, and the current version has an end timestamp of infinity. There is a maximum of 4096 checkpoint pairs, hence the maximum is 250GB of in-memory data, including previous versions.

5. To list the current checkpoint files query sys.dm_db_xtp_checkpoint_files (see the sketch after this list).

6. New data files are listed as “UNDER CONSTRUCTION” until a checkpoint occurs.

7. Trace flag 1851 disables the automerge functionality for in-memory oltp.

8. To manually merge checkpoint files, call sys.sp_xtp_merge_checkpoint_files; the status goes to PENDING until a checkpoint occurs.

9. Checkpoints occur anyway every 512MB of log.

10. Checkpoint files can have a status of “REQUIRED FOR BACKUP”

11. Merge requests can be seen by querying sys.dm_db_xtp_merge_requests (see the sketch after this list); once the merge is complete the request_state_desc changes to INSTALLED.

12. Garbage collection can be forced by calling sys.sp_xtp_checkpoint_force_garbage_collection similar to sp_filestream_force_garbage_collection.

13. In-memory OLTP limitations include no MERGE statements, no EXISTS queries, no MARS (Multiple Active Result Sets), no nested queries, and no parallel plans.

14. To determine tables which are suitable to move to in-memory OLTP use the AMR (Analyze, Migrate, Report) tool.
There is a new data collection set for this, the Transaction Performance collection set, plus the AMR reports.
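
A minimal sketch of the DMV checks from items 5 and 11 (column names as I recall them; verify on your build):

-- current checkpoint file pairs and their states (item 5)
SELECT file_type_desc, state_desc, file_size_in_bytes
FROM sys.dm_db_xtp_checkpoint_files;

-- outstanding merge requests (item 11)
SELECT request_state_desc, lower_bound_tsn, upper_bound_tsn
FROM sys.dm_db_xtp_merge_requests;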


September 17 Oracle 12c event at Oracle UK

This time a specific Oracle 12c database event at Oracle UK.

This included various upgrade scenarios, including the fact that upgrades are done in upgrade mode, which is similar to mount mode in that the datafiles are NOT touched, only the system catalogs.

This also covered using Data Guard to reduce downtime during upgrades, and using GoldenGate, which can even replicate between very old versions and the latest versions, to reduce downtime during upgrades.


August 7 eDBA Oracle 12c 12.1.0.2 in-memory event.

Interesting event with eDBA on August 7 covering Oracle 12c 12.1.0.2, which adds the in-memory features.

This Oracle in-memory offering is columnar and NOT persisted on disk; instead a new memory area is used to store the in-memory version of the data.