-- Script to generate sp_change_users_login commands that fix orphaned users
-- in every user database. The generated script is printed, not executed.
DECLARE @DB_Name varchar(100)
DECLARE @Command nvarchar(2000)

DECLARE database_cursor CURSOR FOR
SELECT name FROM sys.databases
WHERE database_id > 4 AND name NOT LIKE '%master%'

OPEN database_cursor
FETCH NEXT FROM database_cursor INTO @DB_Name

WHILE @@FETCH_STATUS = 0
BEGIN
    SELECT @Command = ' use ' + @DB_Name + ';
declare @query varchar(1000)
declare @executequery cursor
set @executequery = cursor for
    select '' sp_change_users_login '' + CHAR(39) + ''update_one'' + CHAR(39) + '','' + CHAR(39) + name + CHAR(39) + '','' + CHAR(39) + name + CHAR(39)
    from sysusers
    where issqluser = 1
      and (sid is not null and sid <> 0x0)
      and SUSER_SNAME(sid) IS NULL
open @executequery
fetch next from @executequery into @query
while @@fetch_status = 0
begin
    exec (@query)
    print (@query)
    fetch next from @executequery into @query
end
close @executequery;
deallocate @executequery;
go'
    PRINT @Command
    FETCH NEXT FROM database_cursor INTO @DB_Name
END

CLOSE database_cursor
DEALLOCATE database_cursor
-- Script to generate failover commands for all mirrored databases on an instance.
DECLARE @mirroring TABLE (query varchar(200))
INSERT INTO @mirroring SELECT 'use master;'
INSERT INTO @mirroring
SELECT ' ALTER DATABASE ' + QUOTENAME(DB_NAME(database_id)) + ' SET PARTNER FAILOVER ;'
FROM sys.database_mirroring
WHERE mirroring_role_desc = 'PRINCIPAL'
SELECT * FROM @mirroring
GO

-- Script to remove database mirroring for all databases after failover (useful in cut-over).
DECLARE @mirroring TABLE (query varchar(200))
INSERT INTO @mirroring SELECT 'use master;'
INSERT INTO @mirroring
SELECT ' ALTER DATABASE ' + QUOTENAME(DB_NAME(database_id)) + ' SET PARTNER OFF ;'
FROM sys.database_mirroring
WHERE mirroring_role_desc = 'PRINCIPAL'
SELECT * FROM @mirroring
It is a good practice to find and monitor your large SQL Server tables. You may be surprised to find that large audit or logging tables that are not cleaned up regularly have grown to multiple GBs of data, which drives up backup and restore times. Work with your development team on these large tables to see whether they still serve the business or can be archived; archiving them removes a huge maintenance overhead from backups/restores and also frees up disk space.
Here is a script that finds the top 10 largest tables, ordered by reserved disk space.
-- CREATE STAGING TABLES
CREATE TABLE MASTER.DBO.SPT_SPACE
(
    DBNAME   VARCHAR(50) NOT NULL,
    OBJID    VARCHAR(300) NULL,
    ROWS     INT NULL,
    RESERVED DEC(15) NULL,
    DATA     DEC(15) NULL,
    INDEXP   DEC(15) NULL,
    UNUSED   DEC(15) NULL
)

CREATE TABLE [DBO].[TOP10_LARGE]
(
    [DBNAME]        [VARCHAR](50) NOT NULL,
    [TABLE_NAME]    [SYSNAME] NULL,
    [ROWS]          [CHAR](11) NULL,
    [RESERVED_KB]   [VARCHAR](18) NULL,
    [DATA_KB]       [VARCHAR](18) NULL,
    [INDEX_SIZE_KB] [VARCHAR](18) NULL,
    [UNUSED_KB]     [VARCHAR](18) NULL
) ON [PRIMARY]

EXEC SP_MSFOREACHDB 'USE ?
DECLARE @ID INT
DECLARE @TYPE CHARACTER(2)
DECLARE @PAGES INT
DECLARE @DBNAME SYSNAME
DECLARE @DBSIZE DEC(15,0)
DECLARE @BYTESPERPAGE DEC(15,0)
DECLARE @PAGESPERMB DEC(15,0)
SET NOCOUNT ON

-- CREATE A CURSOR TO LOOP THROUGH THE USER TABLES
DECLARE C_TABLES CURSOR FOR
SELECT ID FROM SYSOBJECTS WHERE XTYPE = ''U''
OPEN C_TABLES
FETCH NEXT FROM C_TABLES INTO @ID
WHILE @@FETCH_STATUS = 0
BEGIN
    /* CODE FROM SP_SPACEUSED */
    INSERT INTO MASTER.DBO.SPT_SPACE (DBNAME, OBJID, RESERVED)
    SELECT DB_NAME(), OBJID = @ID, SUM(RESERVED)
    FROM SYSINDEXES
    WHERE INDID IN (0, 1, 255) AND ID = @ID

    SELECT @PAGES = SUM(DPAGES)
    FROM SYSINDEXES
    WHERE INDID < 2 AND ID = @ID

    SELECT @PAGES = @PAGES + ISNULL(SUM(USED), 0)
    FROM SYSINDEXES
    WHERE INDID = 255 AND ID = @ID

    UPDATE MASTER.DBO.SPT_SPACE SET DATA = @PAGES WHERE OBJID = @ID

    /* INDEX: SUM(USED) WHERE INDID IN (0, 1, 255) - DATA */
    UPDATE MASTER.DBO.SPT_SPACE
    SET INDEXP = (SELECT SUM(USED) FROM SYSINDEXES
                  WHERE INDID IN (0, 1, 255) AND ID = @ID) - DATA
    WHERE OBJID = @ID

    /* UNUSED: SUM(RESERVED) - SUM(USED) WHERE INDID IN (0, 1, 255) */
    UPDATE MASTER.DBO.SPT_SPACE
    SET UNUSED = RESERVED - (SELECT SUM(USED) FROM SYSINDEXES
                             WHERE INDID IN (0, 1, 255) AND ID = @ID)
    WHERE OBJID = @ID

    UPDATE MASTER.DBO.SPT_SPACE
    SET ROWS = I.ROWS
    FROM SYSINDEXES I
    WHERE I.INDID < 2 AND I.ID = @ID AND OBJID = @ID

    FETCH NEXT FROM C_TABLES INTO @ID
END
CLOSE C_TABLES
DEALLOCATE C_TABLES'

EXEC SP_MSFOREACHDB 'USE ?
INSERT INTO MASTER.DBO.TOP10_LARGE
SELECT TOP 10
    DBNAME,
    TABLE_NAME    = (SELECT NAME FROM SYS.SYSOBJECTS WHERE ID = OBJID),
    ROWS          = CONVERT(CHAR(11), ROWS),
    RESERVED_KB   = LTRIM(STR(RESERVED * D.LOW / 1024., 15, 0)),
    DATA_KB       = LTRIM(STR(DATA * D.LOW / 1024., 15, 0)),
    INDEX_SIZE_KB = LTRIM(STR(INDEXP * D.LOW / 1024., 15, 0)),
    UNUSED_KB     = LTRIM(STR(UNUSED * D.LOW / 1024., 15, 0))
FROM MASTER.DBO.SPT_SPACE A, MASTER.DBO.SPT_VALUES D
WHERE D.NUMBER = 1 AND D.TYPE = ''E'' AND DBNAME = ''?''
ORDER BY RESERVED DESC'
GO

SELECT * FROM MASTER.DBO.TOP10_LARGE
WHERE DBNAME NOT IN ('MASTER', 'TEMPDB', 'MSDB', 'MODEL')

-- CLEANING UP ALL TABLES
-- DROP TABLE MASTER.DBO.TOP10_LARGE
-- DROP TABLE MASTER.DBO.SPT_SPACE
What are SQL Server statistics?
SQL Server statistics are used by the query optimizer to build optimized execution plans. A statistics object holds an estimate of the number of rows that will be returned, density information about the pages, and a histogram describing the distribution and range of typical values. The optimizer uses all of this information to estimate the optimal execution plan to retrieve the data.
When are these statistics created?
SQL Server creates a statistics object when we create an index on a table. It also creates one automatically when we use a non-indexed column in the WHERE clause of a SELECT query (which usually means we are missing an index there). We can also create statistics manually.
Let’s see this with an example:
I created a new database called Statistics and copied the dbo.Employee table into it from AdventureWorks2012; after the copy the table does not have any index.
When I run a simple SELECT on this table, as in pic (2), the execution plan shows a table scan because there is no index, yet SQL Server still creates statistics, pic (3), so it can reuse them every time the same query runs.
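To confirm that a statistics object really was created, you can query sys.stats for the table. This is a sketch assuming the dbo.Employee example table above; auto-created statistics have system-generated names beginning with _WA_Sys_.

```sql
-- List the statistics objects on the example table.
-- auto_created = 1 marks the ones SQL Server built automatically
-- for a column referenced in a WHERE clause.
SELECT name, auto_created, user_created
FROM sys.stats
WHERE object_id = OBJECT_ID('dbo.Employee');
```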
Let's create an index, pic (4), to get the benefit of a seek; as we all know, a seek is better than a scan when we select specific rows instead of the whole set.
Now that the index is created, SQL Server builds statistics specifically for it, and those statistics tell the optimizer how to use the index to get the data faster and more optimally.
How do these statistics help the optimizer?
Let's look at the estimated execution plan for the same simple SELECT as above. Interestingly, the estimated plan was built using the statistics object created along with the index: it displays the estimated number of rows that will be returned and other estimates, all of which the optimizer read from the statistics.
I'm not going into detail about how a query is processed internally when it hits the SQL engine, but in general the optimizer, which lives in the relational engine, uses these statistics to create the estimated plan and hands that plan over to the storage engine to fetch the data, which is where the actual execution plan comes into the picture.
As long as the estimated and actual plans match, there should be no performance issue, because it means the optimizer had up-to-date statistics. When statistics are not updated, the actual plan can differ from the estimated one, the optimizer produces less accurate plans, and performance suffers.
When do these statistics get out of date?
Usually, statistics become out of date or inaccurate as the data in a table changes over time. By default, statistics for a table are updated when:
- An empty table gets its first row
- A table that had 500 rows or fewer receives 500 row modifications
- A table that had more than 500 rows receives 500 modifications plus 20% of the total rows
- Trace flag 2371 changes the fixed 20% threshold for updating statistics into a dynamic percentage: the more rows a table has, the lower the threshold that triggers a statistics update. For example, with the trace flag enabled, a statistics update is triggered on a table with 1 billion rows after about 1 million changes.
More info on this trace flag can be found here: TraceFlag2371 (I haven't tested it, though).
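One way to watch how close a table is to these thresholds yourself is the sys.dm_db_stats_properties DMF. This is a sketch assuming SQL Server 2008 R2 SP2 / 2012 SP1 or later (where the DMF is available) and the dbo.Employee example table from earlier in the post.

```sql
-- Last update time, rows sampled, and the number of modifications
-- since the last statistics update, per statistics object.
SELECT s.name AS stats_name,
       sp.last_updated,
       sp.rows,
       sp.rows_sampled,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID('dbo.Employee');
```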
The best way to check whether statistics are out of date is to compare the estimated number of rows in an execution plan against the actual number of rows; if they are nearly the same, the stats are accurate, and if not, it is time to update them.
How to automate updating these statistics?
The database options to create and update statistics automatically let SQL Server (the optimizer) handle this for you. But how smart are they on highly transactional databases, or on multi-TB databases with data loads running minute to minute? By default, both options are enabled.
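You can check the current settings per database, and set them explicitly if needed. A sketch; YourDatabase is a placeholder name, not anything from this post.

```sql
-- Check whether auto create/update statistics are on for each database.
SELECT name, is_auto_create_stats_on, is_auto_update_stats_on
FROM sys.databases;

-- Enable them explicitly (they are on by default).
ALTER DATABASE [YourDatabase] SET AUTO_CREATE_STATISTICS ON;
ALTER DATABASE [YourDatabase] SET AUTO_UPDATE_STATISTICS ON;
```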
How do we update these stats manually?
1. The stored procedure below updates the stats across a whole single database:
EXEC sp_updatestats;
-- or
EXEC sp_updatestats 'resample';
– 'resample' makes each statistics update reuse its most recent sample rate.
– sp_updatestats updates only the statistics that require updating, based on the rowmodctr (row modification counter) information in sys.sysindexes.
2. The UPDATE STATISTICS command below updates all stats for a specific table, or for a specific index if one is specified. The command also accepts multiple options; I will explain a few of them that are important to note when updating stats over TBs of data.
UPDATE STATISTICS table_or_view_name [ index_or_statistics_name ] WITH <options>
FULLSCAN: Scans every row in the table to update the stats.
SAMPLE number: If you would rather not scan all of the table's rows (time consuming), this option lets you specify a sample, either a number of rows or a percentage of rows, to scan when updating or creating the stats.
NORECOMPUTE: If this option is specified, the query optimizer completes this statistics update and then disables future automatic updates (AUTO_UPDATE_STATISTICS). We must be careful while using this option, as it turns off auto stats for the specified table.
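Putting the options together, a sketch; dbo.Employee is the example table from this post and IX_Employee_LoginID is a hypothetical index name.

```sql
-- Full scan of all rows: most accurate, but slowest.
UPDATE STATISTICS dbo.Employee WITH FULLSCAN;

-- Sample 25 percent of the rows instead of scanning everything.
UPDATE STATISTICS dbo.Employee WITH SAMPLE 25 PERCENT;

-- Update only the statistics behind one index and disable future
-- automatic updates for it (use NORECOMPUTE with care).
UPDATE STATISTICS dbo.Employee IX_Employee_LoginID WITH FULLSCAN, NORECOMPUTE;
```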
What happens to these stats when re-indexing or Re-organizing indexes?
When you rebuild an index, its statistics are also re-created; reorganizing an index changes nothing about its statistics.
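In other words, a sketch against the dbo.Employee example table:

```sql
-- REBUILD re-creates the index and its statistics:
ALTER INDEX ALL ON dbo.Employee REBUILD;

-- REORGANIZE defragments pages but leaves statistics untouched,
-- so a separate UPDATE STATISTICS may still be needed afterwards:
ALTER INDEX ALL ON dbo.Employee REORGANIZE;
```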
How to check when your stats were last updated?
1. One way is to query the sys.stats system view, something like below:
SELECT OBJECT_NAME(object_id) AS TableName,
       name,
       STATS_DATE(object_id, stats_id) AS Last_Updated
FROM sys.stats
WHERE OBJECTPROPERTY(object_id, 'IsUserTable') = 1
2. The other way is:
DBCC SHOW_STATISTICS (TableName, statsName)
I was not able to find sample databases for SQL Server 2012, as most of the sites I visited offer only the 2000 sample databases, which cannot be restored to 2012. Here I have uploaded links to 2012 sample .bak files (easy to restore to 2012).
Pubs, AdventureWorks, Northwind – Download
For example, in log shipping the secondary or stand-by database stays in restoring mode while logs from the primary are applied, typically every 15 minutes.
To perform a DR test, we need to bring production down and the DR database online; this can be achieved by running the query below against the database on the secondary server.
-- Run this query to bring the database online from the RESTORING state, after all log backups have been successfully applied by the restore jobs on the secondary server
RESTORE LOG [Database Name] WITH RECOVERY
Configuring DBMail involves 3 main steps:
Step 1: Creating the mail profile
Step 2: Creating the mail account
Step 3: Mapping the account to the profile
The script below performs these 3 steps and configures DBMail successfully.
-- ENABLE SQL DBMAIL, if disabled
EXEC sys.sp_configure N'Database Mail XPs', N'1'
GO
RECONFIGURE
GO

-- Add mail profile
EXEC msdb.dbo.sysmail_add_profile_sp
    @profile_name = N'Profile Name'
GO

-- Set as default profile
EXEC msdb.dbo.sysmail_add_principalprofile_sp
    @profile_name = N'Profile Name',
    @is_default = N'1'
GO

-- Add mail account
EXEC msdb.dbo.sysmail_add_account_sp
    @account_name = 'Account_Name',
    @email_address = 'Email Address, EX: DBA@yourcompany.com',
    @display_name = 'Account Name',
    @replyto_address = 'Email Address, EX: DBA@yourcompany.com',
    @mailserver_name = 'your SMTP Server',
    @mailserver_type = N'SMTP or if you use other mail protocol',
    @port = 25,
    @use_default_credentials = 0,
    @enable_ssl = 0
GO

-- Map account to profile
EXEC msdb.dbo.sysmail_add_profileaccount_sp
    @profile_name = N'Profile Name',
    @account_name = N'Account Name',
    @sequence_number = N'1'
GO