Monday, May 11, 2020

Advanced Scanning: Merry-Go-Round Scan;

#Advanced Scanning

Today I read about a very interesting feature of SQL Server Enterprise Edition. I must say it is very intelligently named after the merry-go-round (a continuous cycle of activities or events, especially when regarded as pointless).
So this is basically an interesting feature of the Enterprise edition that efficiently combines multiple scan requests and makes optimum use of resources.

Suppose there is a request from a user to get details that involves scanning data pages, and while that request and scan are in progress a second user initiates a request for the same data.

Points to note here:

1. If we keep the second user in a queue, that user experiences a delay.
2. Even then, the same data pages have to be scanned all over again, causing more disk contention and damage due to frequent movement of the disk arm, i.e. wear and tear.
3. It consumes CPU resources unnecessarily.
4. It is simply inefficient.
5. It also causes the two requests to fight for buffer space.


To avoid this, SQL Server Enterprise Edition has the advanced scanning feature; it solves all of these problems and makes the system more efficient.

Explanation:

Refer to Figure A: suppose we have a table with some 10,000 data pages. User 1 initiates a query and SQL Server starts scanning. When the scan has reached page 2,500, User 2 initiates a query that requires the same data pages to be scanned, so SQL Server joins the second request to the scan already in progress; from page 2,501 onward the scan results are shared between the two users. Now the scan reaches page 4,000 and User 3 requests the same data pages, so again SQL Server keeps scanning and shares the results between Users 1, 2 and 3. When the end of the scan is reached, i.e. the 10,000th (last) page is scanned, User 1 has all its results and drops out of the rotation. The scan for Users 2 and 3 wraps back to page 1 (the start) and continues. When it reaches the 2,500th page again, User 2 drops out of the rotation, but User 3 continues up to the 4,000th page.

This scanning continues until all the scan requests are fulfilled. This is called the "merry-go-round" scan.

Figure A

Without advanced scanning, each user would have to compete for buffer space and cause disk arm contention. The same pages would then be read once for each user, instead of being read one time and shared by multiple users, slowing down performance and taxing resources.
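As an illustrative sketch (the table and column names below are made up), two sessions running unordered scans like these against the same large table are exactly the kind of workload that Enterprise Edition can combine into one shared scan:

-- Session 1: kicks off a full scan of a large table
SELECT COUNT(*) FROM dbo.BigSalesHistory;

-- Session 2: starts while the first scan is still running and needs the same pages;
-- the storage engine can attach this request to the scan already in progress,
-- share the pages being read, and wrap around to pick up the pages it missed.
SELECT SUM(LineTotal) FROM dbo.BigSalesHistory;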

Reference: MSDN

My LinkedIn;

Thursday, April 16, 2020

Certification; My 70-462 Score card



This is just a kind of reminder, and a way to keep it immediately available. In the last few days I gave a few interviews; I was never asked about it, and I never stressed about looking up the score, since I passed somehow (of course it was hard study, from which I don't remember a few things), so I could not tell the interviewer that I scored 800 marks... Hence this is the score card I am uploading for you all (and also for myself....!;)

Note: That aadhar card like pic is mine....I had the same reaction...lol!!



Thursday, April 9, 2020

Factors That Can Delay Log Truncation



 #Factors That Can Delay Log Truncation
#I_Bet*1

We must all have seen the log_reuse_wait_desc column in the sys.databases catalog view. Log records can remain active under a variety of conditions, mentioned below, and this can delay log truncation...
But I bet you have missed some of the values the log_reuse_wait_desc column can have....check them now!!
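Before going through the list, here is a quick way to check the current value for every database on your instance (a plain catalog-view query, nothing edition-specific):

-- Why can the transaction log of each database not be truncated yet?
SELECT name,
       recovery_model_desc,
       log_reuse_wait,        -- numeric value, see the list below
       log_reuse_wait_desc    -- descriptive value, see the list below
FROM sys.databases;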

Value - log_reuse_wait_desc - Description

0 - NOTHING: Currently there are one or more reusable virtual log files.

1 - CHECKPOINT: No checkpoint has occurred since the last log truncation, or the head of the log has not yet moved beyond a virtual log file (all recovery models).

2 - LOG_BACKUP: A log backup is required to move the head of the log forward (full or bulk-logged recovery models only). When the log backup is completed, the head of the log is moved forward, and some log space might become reusable. Note: log backups themselves do not prevent truncation.

3 - ACTIVE_BACKUP_OR_RESTORE: A data backup works like an active transaction and, while running, the backup prevents truncation.

4 - ACTIVE_TRANSACTION: A transaction is active (all recovery models). A long-running transaction might exist at the start of the log backup; in this case, freeing the space might require another log backup. For more information, see "Long-Running Active Transactions" in the source article.

5 - DATABASE_MIRRORING: Database mirroring is paused, or under high-performance mode the mirror database is significantly behind the principal database (full recovery model only).

6 - REPLICATION: During transactional replication, transactions relevant to the publications are still undelivered to the distribution database (full recovery model only).

7 - DATABASE_SNAPSHOT_CREATION: A database snapshot is being created (all recovery models). This is a routine, and typically brief, cause of delayed log truncation.

8 - LOG_SCAN: A log scan is occurring (all recovery models). This is a routine, and typically brief, cause of delayed log truncation.

9 - OTHER_TRANSIENT: This value is currently not used.
Source: MSDN Blog

Thursday, March 12, 2020

#Change database Schema to your choice.

#Schema change

#Quick Trick#2


Today I have another trick for you: how to generate schema change commands very quickly and smartly.

We had a request to restore a database from the Dev environment and change the schema on the destination server to some XYZ schema.

Generating the schema alteration commands for all the tables, functions, etc. by hand would take time proportional to the size of the database.

Remember Quick Trick #1, where we generated the commands to change the recovery model of all the databases on an instance (click here to see more). We used the same logic and generated the commands that I am sharing with you.

--This is the script I used to generate ALTER statements for all objects
-- the schema for each object in cc was changed to cx.
--please note the case for case sensitive servers.
--U=user tables,
-- P=Stored Procedures
--V=VIEW
--'FN', 'FS', 'FT'=Functions

SELECT 'ALTER SCHEMA CX TRANSFER [' + SysSchemas.name + '].[' + DbObjects.name + '];', SysSchemas.name, DbObjects.type, DbObjects.type_desc
FROM sys.objects AS DbObjects
INNER JOIN sys.schemas AS SysSchemas ON DbObjects.schema_id = SysSchemas.schema_id
WHERE SysSchemas.name = 'CC'
AND DbObjects.type IN ('U', 'P', 'V', 'FN', 'FS', 'FT')
ORDER BY DbObjects.type_desc, DbObjects.name

This will generate the ALTER commands and you are good to execute them.
For some you may get an error; check the object type, as an object with that name is probably already present in the target schema, so you need to deal with the existing one first.
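If the target schema does not exist yet on the destination database, create it before running the generated statements. A minimal sketch using the CX schema from the script above (the table name in the sample output is a placeholder):

-- Create the target schema if it is missing (CREATE SCHEMA must run in its own batch)
IF NOT EXISTS (SELECT 1 FROM sys.schemas WHERE name = 'CX')
    EXEC ('CREATE SCHEMA CX');
GO

-- Example of one generated statement you would then execute:
ALTER SCHEMA CX TRANSFER [CC].[SomeTable];   -- [SomeTable] is a placeholder object name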

Thanks to Mahboob Haidari (expert SQL DBA, L3++, maybe even L5, from Downers Grove, Illinois, US) for this knowledge.
NOTE: Do comment and let me know any topic you wish to see; your feedback is always welcome.




Thursday, February 20, 2020

Altering T-SQL to generate code wherever possible

#Generate quick commands 
#SQL code Trick

Quick Trick #1;

This is a quick trick that I recently learned from one of my teammates and that I would like to share with you all.

We have all come across plenty of situations where we need to run a series of similar commands to get our work done, and we end up repeating the same task over and over to finish it.

To simplify this, we will learn how to generate the queries from a command itself, which gives you the upper hand on the task and the advantage of working smartly.

Scenario:

We had to alter all the databases from the full recovery model to simple: they had been restored from the PROD server, so they were all in full, as per the agreed SLA.
So instead of running the ALTER DATABASE command for each database by hand, we generated the commands with a trick.

Command;

select 'alter database ' + name + ' set recovery simple', recovery_model_desc, * from sys.databases where database_id > 4

/*-- Database_id>4 is to select only the user databases. --*/


Result :

alter database ReportServer set recovery simple

alter database ReportServerTempDB set recovery simple
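A small variation I find handy, assuming you only want statements for databases that are not already in simple recovery (same catalog view, just one extra filter):

-- Generate ALTER statements only for user databases still in FULL or BULK_LOGGED
select 'alter database ' + name + ' set recovery simple', recovery_model_desc
from sys.databases
where database_id > 4
and recovery_model_desc <> 'SIMPLE'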

Do let me know if this is useful to you. Comment and suggest topics you wish to see on the blog.
Share it with your DBA friends and help them work smartly.

Knowledge shared by: Pramod Thikekar, SQL DBA with 5 great years of experience.

Monday, December 16, 2019



Shrink Data File (.MDF) 
--YES, it's possible;
# LEARNING #New Skill #Task>Challenge

A recent request from a client to split a database data file (.mdf) into 2 data files got me this new challenge and a great learning. I would like to share the experience and the technicality behind it.
As DBAs we have used the DBCC SHRINKFILE command many times to get our target results, but it has different parameters that are very useful.
So let's try to shrink the MDF file. Let's start with the steps.


Step 1: Take a backup of the existing database. (Play it safe..!)

Step 2: Check the disk usage report. Follow the steps as per the snapshot below:
Right-click the database > Reports > Standard Reports > Disk Usage.

Note: You need to ignore the free space shown in the data file. (Every database keeps some free space to perform maintenance operations; for more on this, follow the blog from Brent.)

Step 3: Now add another file to the database:
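If you prefer T-SQL over the SSMS dialog, the equivalent is roughly the following; the logical name, path, size and growth values are placeholders you would adjust to your environment:

--Add a second data file to the database (path and sizes are examples only)
USE [master]
GO
ALTER DATABASE [Adventureworks2014]
ADD FILE
(
    NAME = N'Adventureworks2014_Data2',
    FILENAME = N'D:\SQLData\Adventureworks2014_Data2.ndf',
    SIZE = 1024MB,
    FILEGROWTH = 256MB
)
GO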


Step 4: Now check again the Disk usage report


Step 5: Now we run DBCC SHRINKFILE with the EMPTYFILE parameter, and it will automatically start moving the data into the new file.

--Move data to the blank file added recently
USE [Adventureworks2014]
GO
DBCC SHRINKFILE (N'Adventureworks2014_Data', EMPTYFILE)
GO


Step 6: Now check again the Disk usage report:

NOTE: Here you need to ignore the space still consumed by the original data file, because that is the space reserved as its initial size.

This is how you can split the data file. To be sure of the results and of the consistency of the database, I ran the DBCC CHECKDB command.
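For reference, the consistency check was along these lines (NO_INFOMSGS just keeps the output short):

--Verify the logical and physical integrity of the database after the data movement
DBCC CHECKDB (N'Adventureworks2014') WITH NO_INFOMSGS
GO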
For more details about the performance gains of multiple data files, follow the blog from Paul.
Thanks for your attention; follow me on LinkedIn.
Post your suggestions and comments:

Saturday, October 19, 2019


‘Working’/problems/stuck/…/… with ‘tempdb’ in SQL Server
#‘Recommendations’ to reduce allocation contention in the SQL Server ‘tempdb’ database
#Only one ‘tempdb’ data file (Are you sure..??)


Introduction: 
Have you ever been to any summit? Let's spare some useless directions. I ask you again: have you ever been to a technical summit?? I am sure if you have, you must have felt the stress of having only one way to reach the venue and then being in the queue to reach the seminar hall.

Imagine the same with SQL Server, tempdb and queries: with a limited number of ways to reach the same destination (tempdb: .mdf), queries need to wait in a queue to get the required data or the required resource.

So today we are going to make ourselves experts in this confusing, resourceful world.
  
Summary: 
SQL Server uses a system database called tempdb to handle a lot of system tasks: it is a shared workspace (as well as the home of user-created temp tables). SQL Server database administrators need to be able to quickly find out which queries and stored procedures are running slowly. On a server that is running Microsoft SQL Server, you may notice severe blocking when the server is experiencing a heavy load. The dynamic management views (sys.dm_exec_requests or sys.dm_os_waiting_tasks) indicate that these requests or tasks are waiting for tempdb resources. Additionally, the wait type is PAGELATCH_UP and the wait resource points to pages in tempdb.

Detailed Explanation:
 During object creation, two (2) pages must be allocated from a mixed extent and assigned to the new object. One page is for the Index Allocation Map (IAM), and the second is for the first page for the object. SQL Server tracks mixed extents by using the Shared Global Allocation Map (SGAM) page. Each SGAM page tracks about 4 gigabytes of data.
To allocate a page from the mixed extent, SQL Server must scan the Page Free Space (PFS) page to determine which mixed page is free to be allocated. The PFS page keeps track of free space available on every page, and each PFS page tracks about 8000 pages. Appropriate synchronization is maintained to make changes to the PFS and SGAM pages; and that can stall other modifiers for short periods. 
When SQL Server searches for a mixed page to allocate, it always starts the scan on the same file and SGAM page. This causes intense contention on the SGAM page when several mixed-page allocations are underway, and that is what causes the problem.
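A quick way to confirm that you are hitting this is to look for latch waits on the tempdb allocation pages. A minimal sketch using a documented DMV; database id 2 is tempdb, and in resource_description (dbid:fileid:pageid) page 1 is the PFS page, page 2 the GAM page and page 3 the SGAM page of each file:

-- Sessions currently waiting on latches for tempdb pages
SELECT session_id,
       wait_type,
       wait_duration_ms,
       resource_description       -- format is dbid:fileid:pageid
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGELATCH%'
  AND resource_description LIKE '2:%';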

Root causes:
  • Non-uniform growth of the tempdb data files.
  • Insufficient disk I/O response.
  • The regular visits to the SGAM page for allocations.
  • The frequent need to visit the PFS page to find free space.
  • A limited number of concurrent GAM and SGAM pages, i.e. basically a limited number of files.


The figure below explains the scenario where the tempdb files are not configured to grow equally; this uneven growth forces the file with more free space to be used more, slowing down performance.


Remedy; (Get rid of it..! At Least When you know you can ..Right!!)

Increase the number of tempdb data files and keep them equally sized.
As an example, if the single data file size of tempdb is 8 GB and the log file size is 2 GB, the recommendation is to increase the number of data files to eight (8) (each of 1 GB, to maintain equal sizing) and leave the log file as is. Having the different data files on separate disks would provide an additional performance benefit; however, this is not required, and the files can co-exist on the same disk volume.
The optimal number of tempdb data files depends on the degree of contention seen in tempdb. As a starting point, you can configure the number of tempdb data files to be at least equal to the number of logical processors that are assigned to SQL Server. For higher-end systems, the starting number could be eight (8). If the contention is not reduced, you may have to increase the number of data files.
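As a sketch of that recommendation, assuming the default logical file name tempdev and placeholder paths and sizes that you would adapt to your server:

-- Resize the existing tempdb data file and add three more of equal size
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 2048MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf', SIZE = 2048MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev3, FILENAME = 'T:\TempDB\tempdev3.ndf', SIZE = 2048MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev4, FILENAME = 'T:\TempDB\tempdev4.ndf', SIZE = 2048MB, FILEGROWTH = 512MB);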

Use trace flags T1117 and T1118.
-T1117: when growing a data file, grow all the files at the same time so they remain the same size, reducing allocation contention points.
-T1118: when doing allocations for user tables, always allocate full extents, reducing the contention of mixed extent allocations.
  
Make each data file the same size; this allows for optimal proportional-fill performance.

References:
• Microsoft Knowledge Base: 4099472 - PFS page round robin algorithm improvement in SQL Server 2014, 2016 and 2017
• MSSQL Tiger Team blog: Files and trace flags and updates in SQL Server tempdb
