MS SQL Server 2014 In-Memory OLTP Whitepaper.

– In-memory tables can also be created as SCHEMA_ONLY so that only the table schema is durable; the data is lost on server restart.

– In the event of an AlwaysOn failover, the data in SCHEMA_ONLY tables is also lost.
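As a sketch of the SCHEMA_ONLY option (the table and column names here are made up for illustration):

```sql
-- Hypothetical non-durable table: only the schema survives a restart.
CREATE TABLE dbo.SessionCache
(
    SessionId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024),
    Payload   VARCHAR(256) COLLATE Latin1_General_100_BIN2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
```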

– Memory-optimized tables have hash indexes rather than B-tree indexes: linked lists of rows that hash to the same value.

– CTP2 should also support range indexes via Bw-trees.

– Indexes are not stored on disk; they are rebuilt as the data is streamed into memory.

– Normal tables, even under snapshot isolation, acquire locks during DML; in-memory tables acquire no locks.

– Natively compiled stored procedures are supported, but with limitations, e.g. they can only access in-memory tables, not disk-based tables.
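A minimal sketch of a natively compiled procedure (the procedure and table names are hypothetical; a natively compiled procedure must be schema-bound and use an atomic block):

```sql
-- Illustrative only: assumes a memory-optimized table dbo.SessionCache exists.
CREATE PROCEDURE dbo.usp_GetSession @SessionId INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT,
                   LANGUAGE = N'us_english')
    SELECT SessionId, Payload
    FROM dbo.SessionCache
    WHERE SessionId = @SessionId;
END;
```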

– Currently any indexes on memory-optimized tables can only be on columns using a Windows (non-SQL) BIN2 collation and natively compiled procedures only support comparisons, sorting, and grouping on those same collations.

– Memory optimized tables can only contain the following data types:

All integer types: tinyint, smallint, int, bigint
All money types: money, smallmoney
All floating types: float, real
date/time types: datetime, smalldatetime, datetime2, date, time
numeric and decimal types
All non-LOB string types: char(n), varchar(n), nchar(n), nvarchar(n), sysname
Non-LOB binary types: binary(n), varbinary(n)

– Create a filegroup with the “CONTAINS MEMORY_OPTIMIZED_DATA” clause.
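For example (database name and file path are made up):

```sql
-- Add a memory-optimized filegroup and a container to it.
ALTER DATABASE MyDb
    ADD FILEGROUP MyDb_mod CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE MyDb
    ADD FILE (NAME = 'MyDb_mod1', FILENAME = 'C:\Data\MyDb_mod1')
    TO FILEGROUP MyDb_mod;
```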



– Tables MUST have at least 1 index, as this is what combines the rows into a table.

– The index can be one supporting a primary key with a bucket count, e.g.: [Name] varchar(32) NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024)
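Putting the notes above together, a minimal memory-optimized table might look like this (table name is illustrative; the key column uses a BIN2 collation per the restriction noted earlier):

```sql
CREATE TABLE dbo.Customers
(
    [Name] VARCHAR(32) COLLATE Latin1_General_100_BIN2 NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024),
    [City] VARCHAR(32) NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```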

– Nonclustered (range) indexes are not available in CTP1.

– There is a separate DLL loaded to handle data manipulation on in-memory tables.

– Additional limitations are:

No DML triggers
No FOREIGN KEY or CHECK constraints
No IDENTITY columns
No UNIQUE indexes other than for the PRIMARY KEY
A maximum of 8 indexes, including the index supporting the PRIMARY KEY

– No schema changes once a table is created: no ALTER TABLE and no index DDL; all indexes must be included in the CREATE TABLE statement.

– A row consists of:

8-byte begin timestamp
8-byte end timestamp
4-byte statement id
2-byte index link count
8 bytes * number of indexes – these pointers form the linked list for each index
row contents – key columns + other columns

– A hash index is an array of pointers: each one points to the first row for that bucket, and then the linked list of pointers on each row is followed.

– It appears from the whitepaper that newly inserted rows go on the front of the chain, which makes sense.

– The bucket count should match the number of unique index key values; it consumes memory and is rounded up to the next power of two.
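Bucket utilisation can be inspected via a DMV. A sketch, assuming the sys.dm_db_xtp_hash_index_stats DMV as documented (column names are from the documentation and may differ in CTP builds):

```sql
-- Check hash bucket counts and chain lengths for memory-optimized tables.
SELECT OBJECT_NAME(h.object_id) AS table_name,
       h.total_bucket_count,
       h.empty_bucket_count,
       h.avg_chain_length,
       h.max_chain_length
FROM sys.dm_db_xtp_hash_index_stats AS h;
```

Long average chain lengths with few empty buckets suggest the bucket count was set too low for the number of unique key values.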

– Transactions have a logical read time and only see rows whose begin/end timestamps are valid for that time.

– Supported isolation levels are SNAPSHOT, REPEATABLE READ and SERIALIZABLE.

– Isolation level READ COMMITTED is only supported for autocommit transactions.

– Isolation level READ_COMMITTED_SNAPSHOT is only supported for autocommit transactions which also do not access disk-based tables.

– Transactions that are started using interpreted Transact-SQL with SNAPSHOT isolation cannot access memory-optimized tables.

– Transactions that are started using interpreted Transact-SQL with either REPEATABLE READ or SERIALIZABLE isolation must access memory-optimized tables using SNAPSHOT isolation.
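A sketch of that last point: from an interpreted REPEATABLE READ transaction, the memory-optimized table would be read under SNAPSHOT via a table hint (table name is made up):

```sql
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;
    -- The memory-optimized table is accessed under SNAPSHOT isolation
    -- via the table hint, while the surrounding transaction stays
    -- REPEATABLE READ for disk-based tables.
    SELECT SessionId, Payload
    FROM dbo.SessionCache WITH (SNAPSHOT);
COMMIT;
```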

– The end timestamp on a row also contains an extra bit which indicates whether there is an active transaction on the row; write-write conflicts cause the later transaction to fail immediately.

– At commit time a transaction generates an end timestamp and enters validation, where checks ensure isolation levels have not been violated; if they have, the transaction is aborted. This is also where a transaction writes to disk.

– Generally 1 log record is generated, containing a write set of insert/delete operations (timestamps + row versions). Multiple log records can be generated if the transaction is large enough: 1 log record per write set, not per row.

– Once the log record is “hardened to storage”, post-processing occurs for each write set. For deletes, update the end timestamp and clear the active bit on the end timestamp. For inserts, set the begin timestamp to the end timestamp of the transaction and clear the active flag on the begin timestamp.

– Garbage collection unlinks old row versions.

– Validation can cause transactions to abort with new errors: error 41305 (“The current transaction failed to commit due to a repeatable read validation failure on table X.”) or error 41325.

– Memory-optimized table types and table variables are not supported in CTP1.

– AlwaysOn, log shipping, and database backup and restore are fully supported.

– Database mirroring and replication are not supported.

– SQL Server Management Studio and SSIS are supported.

