SQLBits Free Day, 8th March 2015
Posted: March 9, 2015 | Filed under: Microsoft SQL Server, Microsoft SQL Server future | Tags: SQL Server future | 1 Comment
Brent Ozar’s talk on SQL Server 2014
With Delayed Durability you need to stop ALL workloads and run sys.sp_flush_log as part of every shutdown, even PLANNED shutdowns. A normal shutdown does NOT flush the log!!
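A minimal sketch of the shutdown routine this implies, using the real sys.sp_flush_log procedure (the database name MyDb is a placeholder):

```sql
-- MyDb is a hypothetical database that has delayed durability enabled:
-- ALTER DATABASE MyDb SET DELAYED_DURABILITY = ALLOWED;

USE MyDb;
-- Flush any log records still sitting in the in-memory log buffer to disk.
-- Run this per database using delayed durability, after stopping workloads
-- and before ANY shutdown, planned or not.
EXEC sys.sp_flush_log;
```

sp_flush_log is scoped to the current database, hence the USE before the call.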
“SQL Server does not provide any durability guarantees other than durable transactions and sp_flush_log.”
“SQL Server shutdown and restart
For delayed durability, there is no difference between an unexpected shutdown and an expected shutdown/restart of SQL Server. Like catastrophic events, you should plan for data loss. In a planned shutdown/restart some transactions that have not been written to disk may first be saved to disk, but you should not plan on it. Plan as though a shutdown/restart, whether planned or unplanned, loses the data the same as a catastrophic event.”
Check the CSS SQL Server Engineers blog article by Bob Dorr on a hidden gem in tempdb performance in SQL Server 2014, plus Kendra Little’s article.
SQL Server 2012 SP1 (2,298 transactions/sec on SSD, with just the top 5 writes = 1,480.9 MB) vs 2014 RTM (3,266 transactions/sec, with total writes of 126.3 MB). “that IO just about disappears in SQL Server 2014! That adds up to a notable improvement in transactions per minute.”!!
This change has been backported to SQL Server 2012 CU10.
Conor Cunningham’s talk on Query Store (the usual caveats for unreleased features apply!)
The other articles I have seen on Query Store are:
Brent Ozar November 4, 2014
Benjamin Nevarez Feb 19th, 2015
The default time an entry stays in the query store is 1hr.
Conor created a table with 10,000 rows containing the values 1->10,000 (highly selective!) and 100,000 rows containing the same value (not at all selective!).
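Conor’s exact script wasn’t shown; a hypothetical reconstruction of a table with that data skew (all names are mine, not his) might look like:

```sql
-- Hypothetical demo table: 10,000 distinct values (highly selective)
-- plus 100,000 copies of a single value (not selective at all).
CREATE TABLE dbo.SkewDemo
(
    val int NOT NULL,
    INDEX ix_val NONCLUSTERED (val)   -- inline index syntax, SQL Server 2014+
);

-- Values 1..10,000: one row each.
INSERT dbo.SkewDemo (val)
SELECT TOP (10000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_columns AS a CROSS JOIN sys.all_columns AS b;

-- 100,000 rows all holding the same value.
INSERT dbo.SkewDemo (val)
SELECT TOP (100000) 0
FROM sys.all_columns AS a CROSS JOIN sys.all_columns AS b;
```

A parameterised query against val then gets a very different ideal plan depending on which value it was first compiled for, which is exactly the plan-regression scenario Query Store is meant to catch.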
SSMS contains a GUI where a query can be chosen from the query store and you can view a Task Manager-like graph of its performance over time. 2 plans appear on the graph (fast/slow) depending upon which plan was chosen. You can force the ‘good’ plan every time.
When you force a plan, the next run of the SQL triggers a recompile.
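For reference, in the released SQL Server 2016 this forcing surfaced as a pair of stored procedures (the id values below are just examples you would read out of the query store views):

```sql
-- Force a known-good plan for a query, identified by the query_id
-- and plan_id recorded in the query store (example values).
EXEC sys.sp_query_store_force_plan @query_id = 42, @plan_id = 7;

-- Stop forcing it again later.
EXEC sys.sp_query_store_unforce_plan @query_id = 42, @plan_id = 7;
```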
FYI, at this point a 3rd plan appeared for the query, but this was just because the optimizer generated 2 plans, one of them mentioning a missing index.
The query store will be policy driven (e.g. which databases you want enabled, whether new queries appear, when to purge queries from the query store), with an auto policy.
The plan is for the query store to help with upgrades and to handle 98% of upgrade issues.
This will allow people to upgrade faster, and they can quickly handle optimizer changes which cause issues via the query store.
The original plan was for optimizer fixes to be rolled up and made the default in the next version WHEN THE DATABASE COMPATIBILITY LEVEL WAS INCREASED.
This stopped happening as large customers a) wanted stability and b) when they had issues these tended to get escalated to Conor, who wants to sleep/play golf more!
Rather than people having to enable trace flag 4199 to pick up the latest optimizer fixes, Conor can make these the default in the next version as planned, and users can quickly use the query store to handle optimizer regressions.
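For context, trace flag 4199 is the real flag that opts you into post-RTM optimizer hotfixes today; it can be enabled per query or server-wide (dbo.Orders is a hypothetical table):

```sql
-- Per query: opt just this statement into post-RTM optimizer fixes.
SELECT COUNT(*)
FROM dbo.Orders          -- hypothetical table
OPTION (QUERYTRACEON 4199);

-- Or server-wide, for all sessions:
DBCC TRACEON (4199, -1);
```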
Obviously Conor cannot say what will appear in a given future SQL Server version, but watch for the vNext public beta!
Plan guides will override the query store’s forced plans.
Query Store will be in all Editions.
Policies can be used to limit what goes into the query store (repeated queries, queries above a given cost, or everything).
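As these policies eventually shipped in SQL Server 2016 (the exact syntax may differ from what the pre-release talk showed), the capture mode, retention and size cap are all options on ALTER DATABASE:

```sql
-- Enable and configure the query store on the current database.
ALTER DATABASE CURRENT
SET QUERY_STORE = ON
    (OPERATION_MODE = READ_WRITE,
     QUERY_CAPTURE_MODE = AUTO,    -- ALL | AUTO | NONE: the "what goes in" policy
     CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 367),  -- the 367-day default
     MAX_STORAGE_SIZE_MB = 1024);  -- cap on space used inside the database
```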
Query store data is stored in the database itself, so when you fail over a database mirror or Always On AG setup the query store information is still there. Query Store information also survives a server restart.
The query store will track which optimizer version was used so you can tell if a query was optimized on the ‘old’ or ‘new’ version when doing upgrades.
You will need dbo rights to administer the query store and “view database state” to view it.
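The viewer permission is an existing grantable database permission; a minimal sketch (SomeUser is a placeholder database principal):

```sql
-- Let a non-dbo user read the query store views without admin rights.
GRANT VIEW DATABASE STATE TO SomeUser;   -- SomeUser is hypothetical
```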
The query store feature will NOT be backported to SQL Server 2012 because of the file format/on-disk format changes and performance work involved.
For a given query store there are multiple tables; the query text + plan are stored once, with multiple runtime execution entries.
In order to store query text only once the server needs to detect identical queries; this is done via an MD5 checksum (the same checksum used to determine if a query matches an entry in the plan cache). The chance of duplicates is something like 1 in 2 to the power 100 (or was it 200?), and no unplanned matches (hash collisions) have been noticed in all the testing, including some testing by big customers who have been working with Microsoft on the new feature.
Comments/spaces at the end of the query text are REMOVED before the matching process against existing queries and are not stored in the query store. Comments and spaces in the middle of the query text ARE significant, and the matching is case sensitive.
Different context settings for a given query (ANSI_NULLS on/off, collation) can affect the query plan chosen and are stored as separate entries in the query store.
I did not get a note of the tables/table relationships in the query store but it is something like
1 query text -> >=1 set of options -> >=1 query executions
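That hierarchy matches the catalog views Query Store eventually shipped with in SQL Server 2016, which can be joined along exactly those relationships:

```sql
-- One query text -> one or more queries (one per set of context
-- settings) -> one or more plans -> many runtime-stats intervals.
SELECT qt.query_sql_text,
       q.query_id,
       q.context_settings_id,   -- distinguishes ANSI settings, collation, etc.
       p.plan_id,
       rs.count_executions,
       rs.avg_duration
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q
  ON q.query_text_id = qt.query_text_id
JOIN sys.query_store_plan AS p
  ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs
  ON rs.plan_id = p.plan_id;
```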
If your SQL Server sits in a VM then, although VM config changes (e.g. less memory for the server) may affect query performance, those changes are not noticed by the query store.
To handle the situation where multiple procs contain the same query text (e.g. “select * from table t” in different stored procedures) a batch_sql_handle appears in the query store to treat these as different queries.
One update to the slides (breaking news!) is that object_name rather than object_id will be used (I’ll update which slide when they appear) to handle the situation where a stored procedure is dropped/recreated. A customer wanted to be able to recognise that these were the same stored procedure!
Runtime stats are kept for 367 days by default (handling 1 year + 1 day even on leap years), 1 more than Oracle, which Conor thinks is better! Values vary, but the defaults would give a query store between 100MB for small, low-use systems and 3GB+ for larger systems.
Even ad hoc queries can be picked up by the query store.
For dynamic SQL (e.g. sp_executesql) the parameter values are put in the query store, not just the parameter names/types which appear in the plan cache DMVs. NOTE: Currently you need to expand the plan as an XML document and parse the XML to get the parameter values; the query store decodes this for you into tables which can be directly queried.
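A hedged sketch of the manual XML-parsing approach described above, pulling compiled parameter values out of a showplan’s ParameterList with XQuery (shown against the plan cache DMVs; plans stored in the query store use the same showplan XML shape):

```sql
-- Extract compiled parameter names and values from cached plan XML.
WITH XMLNAMESPACES
    (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT cp.plan_handle,
       pr.n.value('@Column', 'sysname')                        AS param_name,
       pr.n.value('@ParameterCompiledValue', 'nvarchar(max)')  AS compiled_value
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY qp.query_plan.nodes('//ParameterList/ColumnReference') AS pr(n);
```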
[ My notes are unclear here something like if drop index force fails you can check the query store for queries??]
Security sensitive items e.g. logins/passwords do not currently go in the plan cache but appear in the query store with the sensitive part of the query text redacted.
For linked servers currently the runtime stats appear only in the query store on the remote server and are not pulled back to the source server where the sql was run.
Queries are still captured if OPTION (RECOMPILE) is on.
Nice writeup! One tweak about flushing the log – that only applies to delayed durability, not Hekaton.