SQL Server 2012 Memory Error Recovery.

Just found this on Memory Error Recovery in SQL Server 2012 – http://www.sqlskills.com/blogs/glenn/memory-error-recovery-in-sql-server-2012/

SQL Server 2014 New Features.

– Columnstore indexes are updatable

– The buffer pool can be extended onto SSD drives for read-only pages

– Manual/automatic backups can be stored in the Azure cloud and restored to an Azure VM.

– New SSMS Migration Wizard for Windows Azure Infrastructure Services

– AlwaysOn now allows up to 8 readable secondaries and online indexing. There is also a new SSMS wizard to deploy AlwaysOn secondaries to an Azure VM

– With Windows 2012 R2, up to 640 logical processors and 4TB of memory in a physical environment, and up to 64 virtual processors and 1TB of memory per VM


– The SELECT INTO statement can be executed in parallel

– Stored procedures can be created with the WITH NATIVE_COMPILATION directive. These require schema binding (the SCHEMABINDING directive), EXECUTE AS OWNER and an atomic block context: BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N’us_english’) … END;
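A minimal sketch of such a procedure (names are illustrative only; dbo.Account is assumed to be an existing memory-optimized table keyed on an int column):

```sql
-- Hypothetical memory-optimized table dbo.Account assumed to exist.
CREATE PROCEDURE dbo.usp_GetAccount @Id int
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- SCHEMABINDING forces two-part object names.
    SELECT AccountId, Balance
    FROM dbo.Account
    WHERE AccountId = @Id;
END;
```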

– Support for Windows 2012 R2 Storage Spaces and network virtualization.

– Online index rebuild for a single partition, and switching partitions in/out with DBA-assigned lock priorities.


– Rebuild partition 4, waiting for the schema modification lock for up to 5 minutes.

– When the wait expires: BLOCKERS kills the queries causing the index rebuild to block, SELF aborts the rebuild itself if there are still blockers, and NONE (the default) means keep waiting as before.
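These two bullets presumably map onto the new WAIT_AT_LOW_PRIORITY option; a hedged sketch against a hypothetical dbo.Orders table:

```sql
-- Rebuild partition 4 online; wait up to 5 minutes for the schema
-- modification lock, then kill the blocking sessions (BLOCKERS).
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
REBUILD PARTITION = 4
WITH (ONLINE = ON (
    WAIT_AT_LOW_PRIORITY (
        MAX_DURATION = 5 MINUTES,
        ABORT_AFTER_WAIT = BLOCKERS)));
```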

– For AlwaysOn, when the primary drops offline or the cluster loses quorum, the read-only secondaries no longer go offline. Normally AlwaysOn AG connections go through the AG listener name and then fetch the list of readable replicas from the primary; in this case clients would have to connect directly to the read-only secondaries for reporting purposes.

– Failover Cluster Support for Clustered Shared Volumes

– On-premises SQL Server with data/log files in Azure storage:

CREATE DATABASE x
ON (NAME = x_dat, FILENAME = 'https://internetstorage.windows.net/data/x.mdf')
LOG ON (NAME = x_log, FILENAME = 'https://internetstorage.windows.net/data/xlog.ldf');

– Better cardinality estimator (more than 200 histograms in data distributions)

– Resource Governor for IO
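The IO limits hang off resource pools; a hedged sketch (pool name illustrative, and the per-volume IOPS option names are my assumption of the new syntax):

```sql
-- Cap a pool's disk IO at 100 IOPS per volume.
CREATE RESOURCE POOL ReportingPool
    WITH (MIN_IOPS_PER_VOLUME = 20, MAX_IOPS_PER_VOLUME = 100);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```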

– Integrated BI functionality for Office 365: Power BI for Office 365. Previous add-on options Data Explorer and Geoflow have been integrated into the BI Suite and have been renamed as Power Query (Data Explorer) and Power Map (Geoflow).

– Statistics at the partition level

– Enhancements to the algorithm that determines when statistics are out of date so statistics are invalidated sooner.
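The partition-level statistics above appear to be the new incremental statistics; a hedged sketch with hypothetical names:

```sql
-- Per-partition statistics on a partitioned table.
CREATE STATISTICS Stat_Orders_OrderDate
    ON dbo.Orders (OrderDate)
    WITH INCREMENTAL = ON;

-- Refresh only partition 4 instead of rescanning the whole table.
UPDATE STATISTICS dbo.Orders (Stat_Orders_OrderDate)
    WITH RESAMPLE ON PARTITIONS (4);
```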

– For CTP1, natively compiled stored procedures have sort operator limitations: only up to 8000 rows, and they cannot sort aggregate values or expressions not appearing verbatim in the GROUP BY list.

– Natively compiled stored procedures fail when casting nchar(n) and binary(n) types to a larger fixed-length type.

– Running scripts for memory-optimized tables generated via SSMS may fail if the scripting option Include Collation is not enabled.

– Memory-optimized table row count invalid in SMO and SSMS table properties – it is returned as 0

– When querying the DMV sys.dm_db_xtp_hash_index_stats, the query returns multiple rows per hash index, in case the table has multiple nonclustered hash indexes. In addition, the index name reported in the column index_name is incorrect.

– The DMV sys.dm_xtp_memory_stats may report less memory than is actually allocated and used by memory-optimized objects, when the user does not have permission on all the memory-optimized objects on the instance, as well as VIEW SERVER STATE and VIEW DATABASE STATE permissions.

– Inserting a large batch of rows into a memory optimized table in a single transaction may lead to unbounded log growth

– In SQL Server 2014 CTP1 there is no option to limit the memory consumed by memory-optimized tables and related structures; this can hang the server. In CTP2 there will be an option to use Resource Governor to limit the memory consumed by memory-optimized tables.
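A hedged sketch of what that CTP2 option should look like (database, pool name and percentage are illustrative; sp_xtp_bind_db_resource_pool is the expected binding procedure):

```sql
-- Cap the memory available to a database's memory-optimized tables.
CREATE RESOURCE POOL InMemoryPool WITH (MAX_MEMORY_PERCENT = 40);
ALTER RESOURCE GOVERNOR RECONFIGURE;

EXEC sp_xtp_bind_db_resource_pool
    @database_name = N'HekatonDB',
    @pool_name = N'InMemoryPool';
-- The binding takes effect once the database is taken offline and back online.
```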

– Product Documentation is at http://msdn.microsoft.com/en-us/library/bb418471%28v=sql.10%29.aspx

MS SQL Server 2014 In-Memory OLTP Whitepaper.

SQL Server 2014 In-Memory OLTP Whitepaper – http://download.microsoft.com/download/F/5/0/F5096A71-3C31-4E9F-864E-A6D097A64805/SQL_Server_Hekaton_CTP1_White_Paper.pdf

– In-memory tables can also be SCHEMA_ONLY so that only the table schema is durable.

– For SCHEMA_ONLY tables, data is also lost in the event of an AlwaysOn failover

– Memory optimized tables have hash indexes rather than B-tree indexes – linked lists of rows that hash to the same value

– CTP2 should also support range indexes via Bw-trees

– Indexes are not stored on disk; they are rebuilt as data is streamed into memory.

– Normal tables acquire locks during DML even with snapshot isolation; in-memory tables acquire no locks.

– Natively compiled stored procedures are supported, but with limitations, e.g. they can only access in-memory tables, not disk-based tables.

– Currently any indexes on memory-optimized tables can only be on columns using a Windows (non-SQL) BIN2 collation and natively compiled procedures only support comparisons, sorting, and grouping on those same collations.

– Memory optimized tables can only contain the following data types:

All integer types: tinyint, smallint, int, bigint
All money types: money, smallmoney
All floating types: float, real
date/time types: datetime, smalldatetime, datetime2, date, time
numeric and decimal types
All non-LOB string types: char(n), varchar(n), nchar(n), nvarchar(n), sysname
Non-LOB binary types: binary(n), varbinary(n)

– Create a filegroup with the CONTAINS MEMORY_OPTIMIZED_DATA clause
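For example (database, filegroup and path names are illustrative):

```sql
ALTER DATABASE HekatonDB
    ADD FILEGROUP HekatonFG CONTAINS MEMORY_OPTIMIZED_DATA;

-- The filegroup holds a container (a directory), not a regular data file.
ALTER DATABASE HekatonDB
    ADD FILE (NAME = 'HekatonContainer',
              FILENAME = 'C:\Data\HekatonContainer')
    TO FILEGROUP HekatonFG;
```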



– Tables MUST have at least 1 index, as this is what combines the rows into a table

– The index can be one supporting a primary key with a bucket count, e.g. [Name] varchar(32) NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024)
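Putting that together, a hedged sketch of a full table definition (names illustrative; the BIN2 collation on the key column reflects the CTP1 collation restriction noted above):

```sql
CREATE TABLE dbo.Item
(
    [Name] varchar(32) COLLATE Latin1_General_100_BIN2 NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024),
    Price money NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
-- DURABILITY = SCHEMA_ONLY would make only the schema durable.
```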

– Nonclustered indexes are not available in CTP1.

– There is a separate DLL loaded to handle data manipulation on in-memory tables.

– Additional limitations are:

No DML triggers
No FOREIGN KEY or CHECK constraints
No IDENTITY columns
No UNIQUE indexes other than for the PRIMARY KEY
A maximum of 8 indexes, including the index supporting the PRIMARY KEY

– No schema changes once a table is created: no ALTER TABLE and no index DDL; all indexes must be included in the CREATE TABLE command

– A row consists of:

8 byte begin timestamp
8 byte end timestamp
4 byte statement id
2 byte index link count
8 bytes * number of indexes – these form the linked list for each index.
row contents – key columns + other columns

– A hash index is an array of pointers; each one points to the first row for that bucket, and then the linked list of pointers on each row is followed.

– It appears from the whitepaper that newly inserted rows go on the front of the chain, which makes sense.

– Bucket count should be the same as the number of unique values; it uses memory and is rounded up to the next power of two.

– Transactions have a logical read time and only see rows whose start/end timestamps are valid for that time.

– Supported isolation levels are SNAPSHOT, REPEATABLE READ and SERIALIZABLE.

– Isolation level READ COMMITTED is only supported for autocommit transactions

– Isolation level READ_COMMITTED_SNAPSHOT is only supported for autocommit transactions which also do not access disk-based tables

– Transactions that are started using interpreted Transact-SQL with SNAPSHOT isolation cannot access memory-optimized tables.

– Transactions that are started using interpreted Transact-SQL with either REPEATABLE READ or SERIALIZABLE isolation must access memory-optimized tables using SNAPSHOT isolation.
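One consequence of the rules above: interpreted T-SQL in an explicit transaction has to name the isolation level for the memory-optimized table via a table hint. A hedged sketch (table name hypothetical):

```sql
BEGIN TRANSACTION;  -- session at the default READ COMMITTED level
    SELECT [Name], Price
    FROM dbo.Item WITH (SNAPSHOT);  -- per-table SNAPSHOT hint
COMMIT TRANSACTION;
```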

– The end timestamp on a row also contains an extra bit which indicates whether there is an active transaction on the row; write-write conflicts cause the later transaction to fail immediately.

– At commit time a transaction generates an end timestamp and enters validation, where checks are done to ensure isolation levels are not violated; if they are, the transaction is aborted. This is also where a transaction writes to disk.

– Generally there is 1 log record, which contains a write set of insert/delete operations (timestamps + row versions). Multiple log records can be generated if the transaction is large enough: 1 log record per write set, not per row.

– Once the log record is hardened to storage, post-processing occurs for each write set. For deletes, update the end timestamp and clear the active bit on the end timestamp. For inserts, set the begin timestamp to the end timestamp of the transaction and clear the active flag on the begin timestamp.

– Garbage collection unlinks old row versions.

– Validation can cause transactions to abort with new errors: error 41305 (“The current transaction failed to commit due to a repeatable read validation failure on table X.”) or error 41325.

– Memory-optimized table types and table variables are not supported in CTP1.

– AlwaysOn, log shipping, and database backup and restore are fully supported.

– Database mirroring and replication are not supported.

– SQL Server Management Studio and SSIS are supported.

Sybase ASE 15 Certification

Just found the Sybase ASE 15 Certification with Exam Topics –



Certification Exam Title: Sybase Certified Adaptive Server Administrator Associate (Version 15.0)
Passing Grade: 70%
Number of Multiple Choice Questions: 60
Exam Number: 510-025
Exam Duration: 90 Minutes


Section 1 – Adaptive Server Enterprise Overview (4 questions – 6%)

  • Understand Adaptive Server Enterprise Architecture
  • Understand the purpose of each system database
  • Describe the use of each system table
  • Understand Sybase utilities

Section 2 – Adaptive Server Enterprise New Features 15.0 (9 questions – 15%)

  • Understand the use of the new features in Adaptive Server Enterprise 15.0
  • Understand the new Query Optimization feature
  • Understand changes to Dump and Load
  • Describe any new utility features
  • Describe significant SQL changes
  • Understand how to create computed columns, indexes on computed columns, and describe function-based indexes
  • Understand scrollable cursors such as cursor sensitivity, scrolling the worktable, and global variable usage
  • Describe the performance monitoring, diagnosis, and statistics such as query processing metrics, streamlined trace flags, XML format trace output, GUI Plan Viewer, changes to update statistics, and changes to sp_sysmon
  • Understand extensible new limits for new Adaptive Server Enterprise 15.0 datatypes such as bigint, unsigned int, smallint, unitext, unsigned bigint

Section 3 – Configuration, Upgrade & Installation (4 questions – 6%)

  • Understand how to manage Adaptive Server Enterprise 15.0 configuration parameters
  • Understand how to install and upgrade Adaptive Server Enterprise 15.0 from a previous version
  • Understand the environment variables used by Adaptive Server Enterprise 15.0
  • Understand server page sizes

Section 4 – Resource Allocation (4 questions – 6%)

  • Understand the use of file systems, raw partitions, and the dsync clause
  • Describe the partitioning of tables and indexes
  • Be familiar with how to plan, create and manage devices and databases
  • Have knowledge of capacity planning and system limits: for example, devices, databases, segments and transaction logs
  • Understand the system tables related to capacity planning
  • Understand how to manage tables and the details of the alter table subcommands
  • Understand the purposes of the Last Chance Threshold and Freespace Threshold, and how to make use of them
  • Be acquainted with how to recreate the DDL from reading the storage-related system tables
  • Understand storage space requirements
  • Have knowledge of table creation and structure

Section 5 – Database and Logs (7 questions – 13%)

  • Describe how to manage databases and transaction logs
  • Understand how to perform logical database rebuilds
  • Understand how to use segments for performance and capacity planning
  • Understand the purposes and uses of system- and user-defined segments
  • Understand Relaxed Size Limitations
  • Understand Large Identifiers
  • Understand Version Migration
  • Understand the standard Adaptive Server Enterprise datatypes that are not included in Section 2

Section 6 – Backup and Recovery (6 questions – 11%)

  • Experience with how to manage the database and transaction log dumps
  • Understand how dumps are processed and what types of dump and load functionality is available in Adaptive Server Enterprise 15.0
  • Understand Cross-Platform Dump and Load
  • Understand dump compression
  • Understand how to perform dumps and loads for standby access
  • Understand how to enable up to the minute recovery
  • Understand all of the following: backup production databases, truncate the transaction log, recover from disasters, protect the master database, and use Sybase mirroring

Section 7 – Cache Management (4 questions – 7%)

  • Understand how Adaptive Server Enterprise 15.0 manages memory, such as data cache(s), procedure cache sizing, and when pages are written to disk
  • Understand how to configure data caches and buffer pools
  • Describe the Statement Cache

Section 8 – Security (5 questions – 8%)

  • Be familiar with Adaptive Server Enterprise’s password security features
  • Understand how to manage passwords
  • Describe how to grant and revoke privileges in a granular fashion
  • Understand the use of system roles and user-defined roles and their precedence
  • Understand the difference and precedence between groups and user-defined roles
  • Understand the use of proxy authorization
  • Understand the use of access rules

Section 9 – Transaction Management (3 questions – 5%)

  • Understand transaction Management in Adaptive Server Enterprise, such as commits, rollbacks, locks, and deadlocks
  • Understand how isolation levels are implemented and when to use each one of them
  • Understand implicit and explicit transactions and transaction modes
  • Understand the difference between transactions and checkpoints
  • Understand the transaction log and minimally logged operations
  • Understand how Adaptive Server Enterprise manages pages in memory
  • Describe the use of the BCP utility

Section 10 – Dynamic Reconfiguration (4 questions – 6%)

  • Understand dynamic (re)configuration
  • Have knowledge of which configuration parameters are static or dynamic
  • Understand how dynamic (re)configuration uses memory

Section 11 – Parallel Performance (3 questions – 5%)

  • Have knowledge of how to keep partition statistics up-to-date
  • Understand how to read and interpret columns statistics and how to keep them up to date
  • Understand how to defragment tables efficiently
  • Be familiar with the various table-based and index-based access methods
  • Understand how to enable and utilize parallel processing

Section 12 – Locking and Concurrency (3 questions – 5%)

  • Understand the different lock types and their use by T-SQL commands
  • Understand how different locking schemes work and how to use them
  • Understand how to diagnose lock contention
  • Understand the effect of different lock types, as well as lock schemes on application concurrency
  • Understand lock promotion

Section 13 – Object Resource Management (3 questions – 5%)

  • Describe how to use table partitioning and the criteria for table partitioning
  • Describe the steps involved in creating partitions
  • Understand partition skew and how to rebalance a partitioned table
  • Understand how to keep partition statistics up-to-date