Archive

Author Archive

Indexes – what they are

December 14, 2017 Leave a comment

Recently, a friend of mine working as a Database Analyst approached me with her notes on indexes and a list of questions. The notes apparently came from a blend of “academic” sources, i.e. university textbooks or handouts or something of that sort. None of them were much good for practical application, since her real questions were: “What do I actually do at the end of the day, which indexes do I build in which scenarios, and why do indexes improve performance at all?” Since that “academic” material answered none of these, I decided to put up something in writing of my own on this matter, geared towards explaining what indexes are, what they are for, and how to use them in practice at a basic level.

This article is intended for a non-professional DB audience: somebody with a basic understanding of database concepts who doesn’t know much about indexes, but knows they speed up query execution and seeks information and practical guidance on how to apply them.

——————————————————————————————————————————————————-

Indexes come in two major types, CLUSTERED and NON-CLUSTERED. There can be only one Clustered index per table, while Non-Clustered indexes can be multiple per single table. Another important difference between the two is uniqueness of the key: a Clustered index key is always unique internally (if it is not declared UNIQUE, SQL Server silently adds a hidden “uniquifier” to duplicate key values), while a Non-Clustered index can be declared either unique or non-unique, so values in the column(s) it is built on may have duplicates, depending on the index definition.

A table can have up to 249 Non-Clustered indexes in older SQL Server versions and up to 999 in SQL Server 2008 and later.

Often, Clustered indexes are referred to as CIX and Non-clustered as NIX.

A table with a CIX is called a Clustered Table; a table with no CIX is called a HEAP table (a HEAP table may or may not have NIXes).
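A quick way to tell a heap from a clustered table is to look at sys.indexes, where index_id = 0 denotes a heap, 1 the CIX and values above 1 the NIXes (the table name below is just a placeholder):

```sql
--index_id: 0 = HEAP, 1 = CIX, >1 = NIXes
select t.name as table_name, i.index_id, i.type_desc, i.name as index_name
from sys.tables t
inner join sys.indexes i on i.[object_id] = t.[object_id]
where t.name = 'Table_1' --substitute the table in question
order by i.index_id
```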

A common real-life illustration of both index types is a phone book for CIXes and a book with an index section at the end for NIXes:

– data in a phone book is organized so that a person’s contact record, i.e. first name, last name, phone number and other details, is “tied” to, or associated with, the first letter
or a letter range, e.g. A-C, D-F, G-I … X-Z. This letter ordering is the phone book’s CIX, and the person’s full contact details are the actual record “tied” to it.
Like a typical phone book, there is only one such ordering, i.e. a single CIX; there is simply no other one. And just as it is much faster to look up a phone book entry by that ordering than to scroll through every page starting from the first one, a CIX works the same way in drastically speeding up search operations

– by contrast, a NIX has a completely different underlying idea. It is similar to a book’s index section, which lists the page number where each key word in the book is located.
In this case the combination of a keyword and its page number is the NIX, while the word on the actual page somewhere in the book is the data

– of course, it works somewhat differently in actual relational database implementations, but this example gives a reasonably close overview of the principles and ideas behind the very invention and implementation of CIXes / NIXes

This way, data is naturally sorted in the CIX column by its values.

 


create table test_cix(a int primary key) --implicit cix creation
go
insert into test_cix
select 1
union
select 3
union
select 10
union
select 2
union
select 5
go
select * from test_cix 
go

--note, the values come out sorted even though there is no 'order by' clause anywhere; by contrast, a HEAP table, with no CIX, will NOT have its data naturally ordered

Indexes should almost always be built on numeric columns, preferably of int or bigint type. Creating indexes on non-numeric columns increases their size and often cannot serve index searches directly: some sort of implicit conversion may be needed during the execution phase, significantly reducing index performance, to the point where the Optimizer chooses a full table scan (FTS) as the cheaper way to retrieve the data.
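As a minimal sketch of that implicit-conversion trap (the table and column names here are made up for illustration): filtering a character column with a numeric literal forces a conversion on every row, which typically prevents an index seek, while a literal of the matching type allows one.

```sql
create table test_conv (code varchar(10))
create index nix_test_conv_code on test_conv(code)
go
--the numeric literal makes SQL Server apply CONVERT_IMPLICIT to the column: no index seek
select * from test_conv where code = 12345
--the literal matches the column type: an index seek is possible
select * from test_conv where code = '12345'
go
drop table test_conv
go
```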

Indexes are often referred to as B-Tree structures. This is because an index can have multiple levels, typically no more than two or three, though occasionally more:

1-10 splits into
1-3, 4-6, 7-10, which in turn split into the so-called leaf level
1,2,3, 4,5,6, 7,8,9,10 – the actual data (pages, consisting of 8KB blocks)
This is an example of a two-level index: 1-3, 4-6 and 7-10 form level 1, the leaf level is level 2, and the top level is level 0, which doesn’t count.

The more levels an index has, the less efficient it is, since, obviously, the engine has to traverse all the levels to reach the actual data at the leaf level.

Schematically it can be presented as

***
*** *** ***
*** *** *** ***
*********** ********* **********

which resembles a tree, hence B-Tree. Contrary to a common misconception, the B does not stand for “binary” – a B-Tree node can have many more than two children; it is commonly read as “balanced”. Indexes are balanced-tree data structures.
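The actual depth of an existing index can be checked directly, for example (the table name below is a placeholder):

```sql
--index_depth counts all levels from the root down to the leaf level
select object_name([object_id]) as table_name, index_id, index_depth, index_level, page_count
from sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('Table_1'), null, null, 'DETAILED')
```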

CIXes are typically created on the PK column, although, technically, a PK is a unique constraint, not a CIX by itself (in SQL Server the constraint is enforced by a unique index, which is clustered by default but doesn’t have to be). In other words, a CIX may or may not be created on the PK column, while a PK column also accepts only unique values in the column(s) it’s built on; the constraint and the index remain distinct concepts.
And this is a very important distinction between the two, often overlooked during database design.

There are cases when it is actually more beneficial to build the CIX on column(s) other than the PK, with a NIX created on the PK column(s), but that is rather an exception. In the vast majority of cases, and as a rule of thumb, consider creating the CIX on the PK column serving as the unique record identifier, of int or bigint type, usually with the IDENTITY(1,1) property.
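As a sketch of that rule of thumb (table and column names are arbitrary):

```sql
create table Orders (
    OrderID bigint identity(1,1) not null,  --unique record identifier
    OrderDate datetime,
    Amount money,
    constraint PK_Orders_OrderID primary key clustered (OrderID)  --CIX on the PK column
)
```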

As for NIXes, their typical practical application is FK columns or, more generally, columns participating in JOIN operations. This, however, requires careful analysis of the execution plan and a solid practical understanding of how the Optimizer produces it, its components, and how they differ in efficiency of data retrieval. This is out of scope of this material.

Here are a few general measures and rules of thumb which cover the majority of performance optimization cases. When they don’t work, that usually means advanced performance analysis and tuning is required, which should definitely be left to a DBA.

– create the CIX on the unique record identifier column, very much preferably of a numeric data type: int, or bigint in case there is a suspicion that int’s roughly plus-minus 2.1 billion values might not cover the potential values of that column. Also, try not to include multiple columns, i.e. avoid creating a composite CIX. This is poor practice that most probably won’t add to performance and can easily become a burden on it
– create NIXes on FK columns or columns participating in JOINs. Again, unless absolutely required – meaning a careful examination of the Execution Plan, or the Index Analysis Adviser (part of the Database Engine Tuning Advisor), clearly suggests it – do not create NIXes on FK/JOIN columns whose data types are non-numeric
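For illustration (all names here are made up), a typical NIX on an FK column might look like this; note that SQL Server does not create an index on an FK column automatically, hence the explicit NIX:

```sql
create table ParentOrders (OrderID bigint not null primary key clustered)
go
create table OrderItems (
    OrderItemID bigint identity(1,1) not null primary key clustered,
    OrderID bigint not null,  --FK column, frequently joined on
    Quantity int,
    constraint FK_OrderItems_ParentOrders foreign key (OrderID) references ParentOrders(OrderID)
)
go
create nonclustered index nix_OrderItems_OrderID on OrderItems(OrderID)
go
```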

Another important aspect of index effectiveness is the concept of composite vs covering indexes: what they are and when each type should be used.

A composite index is either a CIX or a NIX built on more than one column. A covering index is a NIX built on one or more columns (in other words, a covering index may also be composite by nature) that additionally INCLUDEs the other columns necessary for better performance.

While both a CIX and a NIX can be composite, i.e. built on multiple columns, only a NIX can be covering. Naturally, a CIX is COMPREHENSIVELY covering by its very definition: it’s built on certain column(s) while all other columns in the table are INCLUDED at the leaf level, thus “covered”

Basically, as a rule of thumb, composite indexes should be NIXes, created when multiple columns of two or more tables participate in the join(s)

select a.column_1, b.column_2
from a inner join b on a.x1 = b.y1 and a.x2 = b.y2 and a.x3 = b.y3

--composite indexes built on the three columns participating in the JOIN between tables a and b
create index nix_a_x1_x2_x3 on a(x1,x2,x3)
create index nix_b_y1_y2_y3 on b(y1,y2,y3)

You might want to get used to one of the common standard index naming conventions, where nix_ is a prefix followed by the table’s name, then all participating columns listed in order of appearance, separated by underscores

nix_[table]_[column_a]_[column_b]_[column_c]…

Covering indexes are to be created when multiple columns are selected from tables participating in joins. That may sound like something making very little sense – isn’t it always like this, i.e. tables are joined and columns from those tables are used in the select, so should a covering index be created every time? Of course not. That’s exactly where the game of frequency kicks in.

If a certain “heavy” select is executed over and over with poor performance, because the Optimizer probably heads for a RID (Row ID) Lookup operation – which is exactly the performance bottleneck – such a covering index might solve the problem. However, it works only for this particular column selection, i.e. should another column be added to the select, the index becomes useless until updated (dropped and re-created) with that additional column.

select a.column_1, a.column_2, a.column_3, b.column_11, b.column_22, b.column_33
from a inner join b on a.x1 = b.y1

--covering indexes built on the columns participating in the JOIN between tables a and b, INCLUDE-ing the columns in the SELECT clause
create nonclustered index nix_a_x1_incl_column_1_column_2_column_3 on a(x1) INCLUDE (column_1, column_2, column_3)
create nonclustered index nix_b_y1_incl_column_11_column_22_column_33 on b(y1) INCLUDE (column_11, column_22, column_33)

You can think of such covering index as sort of analog of CIX for this particular situation. Indeed, it has the key column (a.x1 and b.y1) and all the “data” columns – 1, 2, 3 in a and 11, 22, 33 in b tables, which do closely resemble the nature and structure of CIX, with the difference that this NIX covering index columns values might be non-unique and that is fine.
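One simple way to see whether such a covering index actually pays off (reusing the hypothetical tables a and b from above) is to compare logical reads with SET STATISTICS IO before and after creating it; with the covering index in place, the Lookup operator should disappear from the execution plan:

```sql
set statistics io on
select a.column_1, a.column_2, a.column_3, b.column_11, b.column_22, b.column_33
from a inner join b on a.x1 = b.y1
set statistics io off
--compare the 'logical reads' figures in the Messages tab before and after creating the covering index
```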


--Syntax examples

--implicit CIX creation on PK column
if exists (select 1 from sys.tables t inner join sys.schemas s on t.schema_id = s.schema_id and s.name = 'dbo' and t.name = 'Table_1') drop table Table_1
go
create table Table_1 (Column_1 int PRIMARY KEY CLUSTERED, Column_2 varchar(10), Column_3 money)
go

--explicit CIX creation on PK column
if exists (select 1 from sys.tables t inner join sys.schemas s on t.schema_id = s.schema_id and s.name = 'dbo' and t.name = 'Table_2') drop table Table_2
go
create table Table_2 (Column_1 int not null, Column_2 varchar(10), Column_3 money)
go
alter table Table_2 add constraint PK_Table_2_Column_1 PRIMARY KEY CLUSTERED(Column_1)
go

--The difference between these two ways of creating a table with a CIX on the PK column is the index name, which is generic in the first case (something like PK__Table_1__1AA08F1CE994C9DF)
--and explicitly specified in the second (PK_Table_2_Column_1)

--NIX on PK and CIX on non-PK column (for exceptional cases)
if exists (select 1 from sys.tables t inner join sys.schemas s on t.schema_id = s.schema_id and s.name = 'dbo' and t.name = 'Table_3') drop table Table_3
go
create table Table_3 (Column_1 int not null, Column_2 varchar(10) not null, Column_3 money) --note, Column_1 must be non-nullable because the PK will be placed on it; Column_2 is declared 'not null' here as well, though a clustered index by itself does not strictly require it
go
create clustered index CIX_Table_3_Column_2 on Table_3(Column_2)
go
alter table Table_3 add constraint PK_Table_3_Column_1 PRIMARY KEY (Column_1)
go

--Non-unique and unique NIX creation
if exists (select 1 from sys.tables t inner join sys.schemas s on t.schema_id = s.schema_id and s.name = 'dbo' and t.name = 'Table_4') drop table Table_4
go
create table Table_4 (Column_1 int, Column_2 varchar(10), Column_3 money)
go
create nonclustered index nix_Table_4_Column_1 on Table_4(Column_1) --this creates non-unique NIX as there is simply no designated keyword for non-unique index
go
create unique nonclustered index nix_Table_4_Column_2 on Table_4(Column_2) --this creates a unique NIX
go

--cleaning up
if exists (select 1 from sys.tables t inner join sys.schemas s on t.schema_id = s.schema_id and s.name = 'dbo' and t.name = 'test_cix') drop table test_cix
go
if exists (select 1 from sys.tables t inner join sys.schemas s on t.schema_id = s.schema_id and s.name = 'dbo' and t.name = 'Table_1') drop table Table_1
go
if exists (select 1 from sys.tables t inner join sys.schemas s on t.schema_id = s.schema_id and s.name = 'dbo' and t.name = 'Table_2') drop table Table_2
go
if exists (select 1 from sys.tables t inner join sys.schemas s on t.schema_id = s.schema_id and s.name = 'dbo' and t.name = 'Table_3') drop table Table_3
go
if exists (select 1 from sys.tables t inner join sys.schemas s on t.schema_id = s.schema_id and s.name = 'dbo' and t.name = 'Table_4') drop table Table_4
go

Quick and easy way to automate SQL Express database backups

August 6, 2015 Leave a comment

One of free SQL Server Express limitations is the lack of SQL Server Agent and, consequently, Maintenance Plans as such.

The easy way to automate regular database tasks, such as database backups, is to use the built-in Windows Task Scheduler and just a simple “MS-DOS”-like batch file.

First, the backup stored procedure has to be created in [master] database.


CREATE PROCEDURE sp_DBBackup_User_and_System_Databases
AS
BEGIN
    DECLARE @db_bkp_location NVARCHAR(50)
    DECLARE @today NVARCHAR(10)
    DECLARE @bkpsql NVARCHAR(500)
    DECLARE @db_name NVARCHAR(50)

    DECLARE bkpcur CURSOR FOR
        SELECT name
        FROM sys.databases
        WHERE name NOT IN ('tempdb')
        ORDER BY name

    SET @db_bkp_location = 'E:\DBBackup'
    SET @today = CONVERT(VARCHAR, GETDATE(), 112)
    SET @today = SUBSTRING(@today, 1, 4) + '_'
               + SUBSTRING(@today, 5, 2) + '_'
               + SUBSTRING(@today, 7, 2)
    SET @bkpsql = N''   --initialized empty, rebuilt for every database below
    SET @db_name = N''

    OPEN bkpcur
    FETCH bkpcur INTO @db_name

    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @bkpsql = 'BACKUP DATABASE ' + @db_name + ' TO DISK = '
            + '''' + @db_bkp_location + '\' + @db_name + '_' + @today + '.bak' + ''''
            + ' WITH NOFORMAT, INIT, NAME = ' + ''''
            + @db_name + '-' + @today + '-Full Database Backup'
            + '''' + ', SKIP, NOREWIND, NOUNLOAD';

        --PRINT @bkpsql
        EXECUTE sp_executesql @bkpsql

        FETCH bkpcur INTO @db_name
    END

    CLOSE bkpcur
    DEALLOCATE bkpcur
END

Now the batch file

@echo off
sqlcmd -S .\SQLEXPRESS -E -Q "EXEC master..sp_DBBackup_User_and_System_Databases"

Now what’s left is just to schedule it with standard windows scheduler.

Start -> All Programs -> Accessories -> System Tools -> Scheduled Tasks -> Add Scheduled Tasks, plug in the batch file name, pick up the recurrence you want and it’s good to go. Like it was one of those Maintenance Plan tasks or just plain SQL Agent jobs we’re all used to.

Drop / Recreate FKs

November 7, 2014 Leave a comment

Original at: http://www.codeproject.com/Tips/740561/SQL-Script-to-Drop-and-Re-create-All-Foreign-Keys

;WITH RefColumns AS

    (
SELECT
C.referenced_object_id AS [object_id],
C.parent_object_id,
STUFF((
SELECT ', ' + QUOTENAME(B.name)
FROM sys.foreign_key_columns A
JOIN sys.columns B ON B.[object_id] = A.referenced_object_id
AND B.column_id = A.referenced_column_id
WHERE C.parent_object_id = A.parent_object_id
AND C.referenced_object_id = A.referenced_object_id
FOR XML PATH('')), 1, 2, '') AS ColumnNames
FROM sys.foreign_key_columns C
GROUP BY C.referenced_object_id, C.parent_object_id
)
,
ParentColumns AS
(
SELECT
C.parent_object_id AS [object_id],
C.referenced_object_id,
STUFF((
SELECT ', ' + QUOTENAME(B.name)
FROM sys.foreign_key_columns A
JOIN sys.columns B ON B.[object_id] = A.parent_object_id
AND B.column_id = A.parent_column_id
WHERE C.parent_object_id = A.parent_object_id
AND C.referenced_object_id = A.referenced_object_id
FOR XML PATH('')), 1, 2, '') AS ColumnNames
FROM sys.foreign_key_columns C
GROUP BY C.parent_object_id, C.referenced_object_id
)
SELECT
'ALTER TABLE ' + QUOTENAME(SCHEMA_NAME(PT.[schema_id])) + '.' +
QUOTENAME(PT.name) + ' DROP  CONSTRAINT' + ' ' + QUOTENAME(FK.name)                                                        AS [DropFKScript],
'ALTER TABLE ' + QUOTENAME(SCHEMA_NAME(PT.[schema_id])) + '.' +
QUOTENAME(PT.name) + ' WITH CHECK ADD  CONSTRAINT '+ QUOTENAME(FK.name) + CHAR(13) + CHAR(10) +
'FOREIGN KEY(' + PC.ColumnNames + ')' + CHAR(13) + CHAR(10) +
'REFERENCES ' + QUOTENAME(SCHEMA_NAME(RT.[schema_id])) + '.' +
QUOTENAME(RT.name) + ' (' + RC.ColumnNames + ')' + CHAR(13) +
CHAR(10) + 'GO' + CHAR(13) + CHAR(10) +
'ALTER TABLE ' + QUOTENAME(SCHEMA_NAME(PT.[schema_id])) + '.' +
QUOTENAME(PT.name) + ' CHECK CONSTRAINT ' + QUOTENAME(FK.name) +
CHAR(13) + CHAR(10) + 'GO' + CHAR(13) + CHAR(10)                                                                            AS [CreateFKScript]
FROM sys.foreign_keys FK
JOIN sys.tables PT ON PT.[object_id] = FK.parent_object_id
JOIN ParentColumns AS PC ON PC.[object_id] = FK.parent_object_id
AND PC.referenced_object_id = FK.referenced_object_id
JOIN sys.tables RT ON RT.[object_id] = FK.referenced_object_id
JOIN RefColumns AS RC ON RC.[object_id] = FK.referenced_object_id
AND RC.parent_object_id = FK.parent_object_id
WHERE PT.name NOT IN ('dtproperties', 'sysdiagrams', '__RefactorLog')
AND
RT.name NOT IN ('dtproperties', 'sysdiagrams', '__RefactorLog')
--and (RT.name like 'Award%' or RT.name like 'MasterBreeder%' or RT.name = 'AverageRegistrationGroup')
AND RT.name LIKE 'Customer%'
ORDER BY PT.name

Damm Algorithm T-SQL Implementation

November 7, 2014 1 comment

Recently I ended up having to write my own T-SQL function returning the Damm algorithm check digit. I was sure I would find some Damm algorithm T-SQL snippet on the net, but no – implementation examples can be found in literally any mainstream language except T-SQL. So here comes its T-SQL version.

Note 1:

I used a string for the input/output Old/New Number to make it work with alpha-numerics of a certain format – hence there is some padding, prefixing and postfixing of the New Number, as well as a fixed Old Number format, which is, again, a varchar. This can easily be changed to numerics only, with no prefixes, postfixes or any of that stuff, if needed: simply take them off and change the input/output data types.

Note 2:

I used this wiki for the reference of how this algo works: http://en.wikipedia.org/wiki/Damm_algorithm

CREATE FUNCTION dbo.CheckDigitDammAlgo (@vOldNumber NVARCHAR(6))
RETURNS NVARCHAR(15)
AS
BEGIN
DECLARE
@cPaddingLength tinyint = 6,        --constant padding length
@cPadding NCHAR(6) = '000000',      --padding constant
@cPrefix  NCHAR(3) = 'ZX_',         --prefix constant
@cPostfix NCHAR(4) = '_712',        --postfix constant
@vNewNumber NVARCHAR(20);           --New Number (output)
DECLARE @vDigit tinyint;            --each of six Old Number digits
DECLARE @vDigitIndex tinyint = 1;   --Old Number digit position (1 through 6)
DECLARE @vFlagDone bit = 0;         --the outer while-loop ending flag
DECLARE @vOldInterim tinyint
,@vNewInterim tinyint
,@vColumnIndex NVARCHAR(10)
,@vRowIndex tinyint
,@vTableEntry tinyint;
DECLARE @vArray TABLE (
interim tinyint
,zero tinyint
,one tinyint
,two tinyint
,three tinyint
,four tinyint
,five tinyint
,six tinyint
,seven tinyint
,eight tinyint
,nine tinyint
);
INSERT INTO @vArray (interim, zero, one, two, three, four, five, six, seven, eight, nine)
VALUES (0, 0, 3, 1, 7, 5, 9, 8, 6, 4, 2);
INSERT INTO @vArray (interim, zero, one, two, three, four, five, six, seven, eight, nine)
VALUES (1, 7, 0, 9, 2, 1, 5, 4, 8, 6, 3);
INSERT INTO @vArray (interim, zero, one, two, three, four, five, six, seven, eight, nine)
VALUES (2, 4, 2, 0, 6, 8, 7, 1, 3, 5, 9);
INSERT INTO @vArray (interim, zero, one, two, three, four, five, six, seven, eight, nine)
VALUES (3, 1, 7, 5, 0, 9, 8, 3, 4, 2, 6);
INSERT INTO @vArray (interim, zero, one, two, three, four, five, six, seven, eight, nine)
VALUES (4, 6, 1, 2, 3, 0, 4, 5, 9, 7, 8);
INSERT INTO @vArray (interim, zero, one, two, three, four, five, six, seven, eight, nine)
VALUES (5, 3, 6, 7, 4, 2, 0, 9, 5, 8, 1);
INSERT INTO @vArray (interim, zero, one, two, three, four, five, six, seven, eight, nine)
VALUES (6, 5, 8, 6, 9, 7, 2, 0, 1, 3, 4);
INSERT INTO @vArray (interim, zero, one, two, three, four, five, six, seven, eight, nine)
VALUES (7, 8, 9, 4, 5, 3, 6, 2, 0, 1, 7);
INSERT INTO @vArray (interim, zero, one, two, three, four, five, six, seven, eight, nine)
VALUES (8, 9, 4, 3, 8, 6, 1, 7, 2, 0, 5);
INSERT INTO @vArray (interim, zero, one, two, three, four, five, six, seven, eight, nine)
VALUES (9, 2, 5, 8, 1, 4, 3, 6, 7, 9, 0);
IF ISNUMERIC(@vOldNumber) = 0
BEGIN
    RETURN NULL;
END

--pad @vOldNumber with zeroes to six characters
SET @vOldNumber = RIGHT(@cPadding + RTRIM(@vOldNumber), @cPaddingLength)
WHILE @vFlagDone = 0
BEGIN
SET
@vOldInterim = 0 --first OldInterim is always 0 for any digit
WHILE @vDigitIndex <= @cPaddingLength
BEGIN
SET
@vDigit = CONVERT(tinyint, SUBSTRING(@vOldNumber, @vDigitIndex, 1));
SET @vColumnIndex = CASE @vDigit
WHEN 0 THEN 'zero'
WHEN 1 THEN 'one'
WHEN 2 THEN 'two'
WHEN 3 THEN 'three'
WHEN 4 THEN 'four'
WHEN 5 THEN 'five'
WHEN 6 THEN 'six'
WHEN 7 THEN 'seven'
WHEN 8 THEN 'eight'
WHEN 9 THEN 'nine'
END;
SET @vRowIndex = @vOldInterim;
SELECT  @vTableEntry = CASE @vColumnIndex
WHEN 'zero' THEN zero
WHEN 'one' THEN one
WHEN 'two' THEN two
WHEN 'three' THEN three
WHEN 'four' THEN four
WHEN 'five' THEN five
WHEN 'six' THEN six
WHEN 'seven' THEN seven
WHEN 'eight' THEN eight
WHEN 'nine' THEN nine
END
FROM    
@vArray
WHERE   interim = @vRowIndex;
SET @vOldInterim = @vTableEntry;
SET @vDigitIndex = @vDigitIndex + 1
END
SET
@vFlagDone = 1
END
SET
@vNewNumber = @cPrefix + @vOldNumber + CONVERT(VARCHAR, @vTableEntry) + @cPostfix;
RETURN @vNewNumber
END

SELECT dbo.CheckDigitDammAlgo ('1')
SELECT dbo.CheckDigitDammAlgo ('2')
SELECT dbo.CheckDigitDammAlgo ('3')
SELECT dbo.CheckDigitDammAlgo ('4')
SELECT dbo.CheckDigitDammAlgo ('5')
SELECT dbo.CheckDigitDammAlgo ('6')
SELECT dbo.CheckDigitDammAlgo ('7')
SELECT dbo.CheckDigitDammAlgo ('8')
SELECT dbo.CheckDigitDammAlgo ('9')
SELECT dbo.CheckDigitDammAlgo ('10')

Viewing index fragmentation

November 7, 2013 Leave a comment

All tables in a given database

SELECT DB_NAME(database_id) AS 'databasename',
	OBJECT_NAME([OBJECT_ID]) AS 'tablename',
	index_type_desc AS index_type_desc,
	avg_fragmentation_in_percent AS '% fragmentation',
	fragment_count '# fragments',
	page_count,
	avg_fragment_size_in_pages 'Avg frag size in pages'
FROM   sys.dm_db_index_physical_stats (DB_ID('[database_name]'),null,null,null,null );

Single given table in a database

SELECT DB_NAME(database_id) AS 'databasename',
	OBJECT_NAME([OBJECT_ID]) AS 'tablename',
	index_type_desc AS index_type_desc,
	avg_fragmentation_in_percent AS '% fragmentation',
	fragment_count '# fragments',
	page_count,
	avg_fragment_size_in_pages 'Avg frag size in pages'
FROM   sys.dm_db_index_physical_stats (DB_ID('[database_name]'),OBJECT_ID('[table_name]'),null,null,'DETAILED' );
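Once the fragmentation numbers are in hand, a common rule of thumb (not a hard rule) is to REORGANIZE indexes in roughly the 5-30% fragmentation range and REBUILD those above 30%; the index and table names below are placeholders:

```sql
--light fragmentation (roughly 5-30%): reorganize, always an online operation
alter index nix_Table_4_Column_1 on Table_4 reorganize
--heavy fragmentation (30%+): rebuild
alter index nix_Table_4_Column_1 on Table_4 rebuild
```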

I/O pressure

November 7, 2013 Leave a comment
-- Read
  SELECT --TOP 20 
	qt.text AS 'SP Name', qs.total_physical_reads, qs.total_physical_reads/qs.execution_count AS 'Avg Physical Reads',
    qs.execution_count AS 'Execution Count',
    qs.execution_count/DATEDIFF(Second, qs.creation_time, GetDate()) AS 'Calls/Second',  
    qs.total_worker_time/qs.execution_count AS 'AvgWorkerTime',
    qs.total_worker_time AS 'TotalWorkerTime',
    qs.total_elapsed_time/qs.execution_count AS 'AvgElapsedTime',
    qs.max_logical_reads, qs.max_logical_writes,  
    DATEDIFF(Minute, qs.creation_time, GetDate()) AS 'Age in Cache', qt.dbid 
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
    --WHERE qt.dbid = db_id() -- Filter by current database
    ORDER BY qs.total_physical_reads DESC

-- Write
   SELECT TOP 20 qt.text AS 'SP Name', qs.total_logical_writes, qs.total_logical_writes/qs.execution_count AS 'AvgLogicalWrites',
    qs.total_logical_writes/DATEDIFF(Minute, qs.creation_time, GetDate()) AS 'Logical Writes/Min',  
    qs.execution_count AS 'Execution Count', 
    qs.execution_count/DATEDIFF(Second, qs.creation_time, GetDate()) AS 'Calls/Second', 
    qs.total_worker_time/qs.execution_count AS 'AvgWorkerTime',
    qs.total_worker_time AS 'TotalWorkerTime',
    qs.total_elapsed_time/qs.execution_count AS 'AvgElapsedTime',
    qs.max_logical_reads, qs.max_logical_writes, qs.total_physical_reads, 
    DATEDIFF(Minute, qs.creation_time, GetDate()) AS 'Age in Cache',
    qs.total_physical_reads/qs.execution_count AS 'Avg Physical Reads', qt.dbid
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
    --WHERE qt.dbid = db_id() -- Filter by current database
    ORDER BY qs.total_logical_writes DESC

 

CPU Pressure

November 7, 2013 Leave a comment
SELECT TOP 50 
      SUM(qs.total_worker_time) AS total_cpu_time, 
      SUM(qs.execution_count) AS total_execution_count,
      COUNT(*) AS  number_of_statements, 
      qs.sql_handle 
FROM sys.dm_exec_query_stats AS qs
GROUP BY qs.sql_handle
ORDER BY SUM(qs.total_worker_time) DESC
SELECT 
      total_cpu_time, 
      total_execution_count,
      number_of_statements,
      s2.text
FROM 
      (SELECT TOP 50 
            SUM(qs.total_worker_time) AS total_cpu_time, 
            SUM(qs.execution_count) AS total_execution_count,
            COUNT(*) AS  number_of_statements, 
            qs.sql_handle
      FROM 
            sys.dm_exec_query_stats AS qs
      GROUP BY qs.sql_handle
      ORDER BY SUM(qs.total_worker_time) DESC) AS stats
      CROSS APPLY sys.dm_exec_sql_text(stats.sql_handle) AS s2
SELECT TOP 50
total_worker_time/execution_count AS [Avg CPU Time],
(SELECT SUBSTRING(text,statement_start_offset/2,(CASE WHEN statement_end_offset = -1 then LEN(CONVERT(nvarchar(max), text)) * 2 ELSE statement_end_offset end -statement_start_offset)/2) FROM sys.dm_exec_sql_text(sql_handle)) AS query_text, *
FROM sys.dm_exec_query_stats 
ORDER BY [Avg CPU Time] DESC