
WMI rule stuck while applying SP3 on SQL Server 2008 R2


I want to share a scenario we hit while upgrading SQL Server 2008 R2 from an older service pack to a newer one.

Patch being applied: SQL Server 2008 R2 SP2 to SP3.
Windows: Windows Server 2008 R2 Standard

Issue -> You might face the problems below while applying the SP upgrade, or while waiting for the server to come online (pre reboot or post reboot):

A) You have started the SP upgrade and the setup window gets stuck at the initial rule-checking stage, i.e.

it hangs at the “Windows Management Instrumentation (WMI) service” rule.

B) Before you apply the SP upgrade, suppose you have rebooted the box and the server takes a very long time (1 or 2 hours) to come back, getting stuck at “Applying group policy settings…” or similar both before and after the reboot (this symptom is Windows-only, not part of our SQL upgrade; the SQL upgrade symptom is scenario A).

So we worked with the OS team and were able to understand what was causing it:
1. The WMI service itself was fine.
2. No errors were reported in the error logs/event logs.

However, there is a file at the OS level (related to Windows; I am not a Windows expert) that holds the WMI repository data, and it had grown to 4 GB. The path is as below:

C:\windows\system32\wbem\Repository\OBJECTS.DATA - this OBJECTS.DATA file occupied 4 GB.

The Windows team confirmed there is a workaround (it might be a bug; I am not sure).

Workaround -> Windows will purge this data; 2 or 3 reboots of the server are required.

For this activity we stopped the SQL Server services and set them to Manual; the Windows team purged the data and the file size came down to 16 MB.
Finally we started SQL Server, set the services back to Automatic, and then started the SP upgrade, and everything went through like a bullet train.

Thank you!

Reason: Invalid attribute/option identifier while opening SQL Server Enterprise Manager in SQL Server 2000

Recently a customer had an issue opening SQL Server Enterprise Manager in SQL Server 2000, and we could reproduce it too. Below is the error:

———————————————————————————————
A Connection could not be established to <XXXXXXXXXXX>.

Reason: Invalid attribute/option identifier.

Please verify SQL Server is running and check your SQL Server registration properties (by right-clicking on the <XXXXXX> node) and try again.
———————————————————————————————

Below are the steps we worked through for the resolution:

1. Reinstall MDAC to see whether that fixes the issue -> No, it did not.

2. Reinstall the client tools alone -> No, that did not work either.

The only option we had not tried was replacing sqlsrv32.dll. Here is what we did: we took sqlsrv32.dll from another working machine built from the same image as the problem one, renamed the current file on the problem machine to sqlsrv32_old.dll, and copied the working copy to this path:

C:\WINNT\SYSTEM32

After that, everything worked for us.

So be aware:
1. While reinstalling MDAC, it is better to stop the SQL Server services so that it can relink correctly to all the SQL components as well.
2. Do not play with DLL files unless you know exactly what you are doing.

Log Shipping FAQs 1 (it helps for your interview - the very basics)

Hi Guys,

Just be aware that you might face this in an interview (even though it is a simple question): what new feature was introduced in SQL Server 2008 for log shipping?

A) In earlier versions (before SQL Server 2008) you could set the backup/copy/restore job frequency only in minutes and hours. Now the jobs can also be scheduled in seconds (though I am quite sure that brings a lot of other problems as well).

 

B) Backup compression (log backup compression).

 

Other FAQs

—————

1. Can we configure log shipping if the SQL Server service account is running under the local system account?

Sol -> Yes, we can; see below, per BOL:

“If the SQL Server service account is running under the local system account on the primary server, you must create the backup folder on the primary server and specify the local path to that folder here. The SQL Server service account of the primary server instance must have Read and Write permissions on this folder”.

 

2. Which editions are required to configure log shipping?

>> Apart from Express, all other editions support it, but the requirements differ by version.

See below:

—-

A) For SQL Server 2000:

Server              Editions
Primary Server   -> Enterprise or Developer Edition
Secondary Server -> Enterprise or Developer Edition
Monitor Server   -> Any Edition

B) For SQL Server 2005 (per BOL):

Requirements

Log shipping has the following requirements:

!) SQL Server 2005 Standard Edition, SQL Server 2005 Workgroup Edition, or SQL Server 2005 Enterprise Edition must be installed on all server instances involved in log shipping.

!!) The servers involved in log shipping should have the same case sensitivity settings.

!!!) The databases in a log shipping configuration must use the full recovery model or bulk-logged recovery model.

C) For SQL Server 2008 -> all editions except Express.

3. What permissions are required on the shared folders on the primary and secondary for the service accounts?

>>As per BOL

For the backup job, read/write permissions to the backup directory are required on the following:

 

 The SQL Server service account on the primary server instance.

The proxy account of the backup job. By default, this is the SQL Server Agent account on the primary server instance.

 For the copy job, read permissions to the backup directory and write permissions to the copy directory are required by the proxy account of the copy job. By default, this is the SQL Server Agent account on the secondary server instance.

For the restore job, read/write permissions to the copy directory are required by the following:

The SQL Server service account on the secondary server instance.

The proxy account of the restore job. By default, this is the SQL Server Agent account on the secondary server instance.

 

 

4) What happens to log shipping if you add a data file on the primary server?

>> If the primary and secondary servers have the same disk configuration, you can ignore it and the secondary will pick it up automatically. However, if you changed anything on the primary side, for example you created a new folder for the file or added it on a different drive, then you have to restore the next log backup (i.e. the one taken after adding the data file) manually with the MOVE option (note that you must also specify the TUF file while restoring the log manually); a sketch follows.
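A minimal sketch of that manual restore on the secondary; the database name (LSDemo), backup path, new file's logical and physical names, and TUF path are all hypothetical. STANDBY is how you specify the TUF file; use NORECOVERY instead if your secondary is in no-recovery mode:

-- Restore the first log backup taken after the data file was added,
-- relocating the new data file to a valid local path on the secondary
RESTORE LOG LSDemo
FROM DISK = N'\\BackupShare\LSDemo\LSDemo_tlog.trn'       -- hypothetical backup file
WITH MOVE N'NewData' TO N'E:\SQLData\LSDemo_NewData.ndf', -- hypothetical logical/physical names
STANDBY = N'E:\SQLData\LSDemo.tuf'                        -- hypothetical TUF path (keeps secondary readable)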

5) How are index operations logged in log shipping?

>> Index operations are fully logged in the full recovery model, so they are replayed on the secondary as well.

6) If I shrink the primary database, will that be applied to the secondary database as well?

>> Yes, it will; the shrink is a logged operation, so it flows through the log backups.

7) What are the types of log shipping, i.e. the types of restore modes?

>> A) With No Recovery  B) Standby/read-only

8) What is a TUF file?

>> TUF stands for T -> Transaction, U -> Undo, F -> File; so it is the Transaction Undo File.

You know that the log backup/copy/restore jobs work according to their scheduled frequency.

TUF-

It contains the modifications that were not yet committed on the primary database when the transaction log backup was taken and then restored to the secondary database. When the next log backup is restored on the secondary side, SQL Server uses the data in the undo file to continue restoring those incomplete transactions (assuming the next log backup from the primary has completed and contains their remaining records). The file also needs to be specified whenever you reconfigure log shipping, and during the first configuration.

9) My TUF file was unfortunately deleted while the server was shut down or during server downtime; what happens to log shipping?

>> Log shipping will stop working. If you have an OS-level backup, you can restore the file from it and you will be fine.

10) My log shipping is not working; what do I do now?

>> The first thing to look at is your log shipping job history -> backup/copy/restore (suppose all the jobs are failing), or, if the failure is specific, look at the specific job. For example:

Backup job history on the primary.

Copy and restore job history on the secondary. (A sketch of the monitor queries follows.)
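As a quick sketch, you can also query the msdb log shipping monitor tables (these exist on SQL Server 2005 and later). Run the first query on the primary and the second on the secondary:

-- On the primary: last log backup taken
SELECT primary_database, last_backup_file, last_backup_date
FROM msdb.dbo.log_shipping_monitor_primary

-- On the secondary: last file copied and last file restored
SELECT secondary_database, last_copied_date, last_restored_date
FROM msdb.dbo.log_shipping_monitor_secondary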

Some reasons log shipping can break or stop working:

===============================================================

1. Recent modifications (permissions removed) on the shared folders (either the primary shared folder or the secondary side).

2. Human error -> someone used the truncate-only backup option or switched the recovery model.

3. The date/time on the Windows servers does not match, due to DST changes or similar.

4. As usual -> a data file was added on the primary on a different drive; you need to apply that on the secondary with MOVE (see FAQ 4 above), and until then your log shipping restore job will fail.

5. An I/O, memory, or network bottleneck; please look at the error logs and event logs.

6. Corruption.

7. Your TUF file is missing.

8. sqllogship.exe - hahahaha, someone deleted it.

9. Drives are full.

10. An improper SP/hotfix was applied.

11. You may have set an incorrect value for the out-of-sync alert threshold.

12. Maybe you scheduled the jobs at the same time.

13. Your msdb database is full.

Continues……

My log file is full; what do I do now, and how do I approach fixing the issue, or at least get information about what is going on?

Everyone knows that we cannot directly control log file usage driven by user/DB engine activity, because it depends entirely on the logic of the T-SQL code (i.e. DML operations), the number of concurrent operations against the same database, optimization jobs (reindex/update statistics) running, long-running transactions, transactions left open for a very long time, bulk operations, and so on. However, if we follow best practices we can at least manage the log file growth. Anyway, let us see here.

I will not cover all the details, but I will explain which steps you can follow.

A user says: oops, my transaction failed because it says the log file is full, etc. You check the error log and it has recorded error 9002 along with other information.

Well, whenever the log file is full, look at the following things:

1. Execute DBCC SQLPERF(LOGSPACE) -> verify that your database's log usage has reached 99 or 100%.

2. Then check what autogrowth is set for the log file -> suppose you observe that the log file growth is restricted to some maximum size in MB; a sketch of a query for this follows.
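As a sketch, the query below (run in the context of the affected database) shows the current size, growth increment, and cap of the log file; in sys.database_files the size columns are in 8 KB pages, and max_size = -1 means unrestricted growth:

SELECT name,
       size / 128 AS size_mb,            -- 8 KB pages converted to MB
       growth,                           -- pages, or percent if is_percent_growth = 1
       is_percent_growth,
       CASE WHEN max_size = -1 THEN 'unrestricted'
            ELSE CAST(max_size / 128 AS varchar(20)) + ' MB'
       END AS max_size_setting
FROM sys.database_files
WHERE type_desc = 'LOG'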

Note: before going to solution 1, check step 3.

Solution 1: OK, if growth is restricted, then find the drive where the log file resides; you can execute the query below:
sp_helpfile  -- gives you complete information, including the log file's drive path

Then verify that the drive holding the log file has enough free space. The reason I say this is that if the drive has enough space, you can increase the autogrowth/maximum size of the log file. But make sure you increase it

according to your need; you shouldn't go ahead and grow it by 300 GB just because the space is there, because other database files may need that drive space if they share the drive. If the drive is dedicated to your database alone, then you are happy - I hope you know what you are doing 🙂

OK, my drive is also full; then what do I have to do?
Well, you have to add another log file on a drive that has enough space, or you need to move the file (once it is moved, run CHECKDB to verify the database's consistency); a sketch of adding a file follows.
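A minimal sketch of adding a second log file; the database name, logical file name, and path here are hypothetical:

-- Add a second transaction log file on a drive with free space
ALTER DATABASE SalesDB                          -- hypothetical database name
ADD LOG FILE
(
    NAME = SalesDB_log2,                        -- hypothetical logical name
    FILENAME = N'F:\SQLLogs\SalesDB_log2.ldf',  -- hypothetical path on a roomier drive
    SIZE = 1024MB,
    FILEGROWTH = 512MB
)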

3. Sometimes the log file space is released automatically, without any user action.
Wow, how is that?
A) Execute the query below:

USE <your database name>
GO
DBCC OPENTRAN
GO

(or use the DMV sys.dm_tran_active_transactions)
Verify that there are no open transactions (of course, in this scenario that is not the case). Suppose you do see an open transaction: then look in the same output at when it started and how long it has been running (follow the SPID). You need to wait for the transaction (a user transaction, an optimization job, or anything else) either to succeed or to fail (if it fails, it throws an error explaining why, and you analyze accordingly).

Once those transactions complete successfully: if the database is in the full recovery model, the next transaction log backup will clear the space inside the VLFs and allow the next user transactions (a sketch of such a log backup follows).
In the simple recovery model, once the transaction is done the space is truncated internally and automatically, so usage goes back to normal.
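A minimal sketch of such a log backup; the database name and backup path are hypothetical:

-- In the full recovery model, the log backup allows inactive VLFs to be reused
BACKUP LOG SalesDB                          -- hypothetical database name
TO DISK = N'G:\SQLBackups\SalesDB_log.trn'  -- hypothetical backup path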

Sometimes a developer or another user runs ad-hoc queries or an implicit transaction (without a COMMIT/END TRAN), forgets to end the transaction, or runs something on production without realizing it. In that case
you have to deal with it carefully: don't jump to killing the SPID immediately; instead, get information from the user about what the transaction is, what it is doing, whether you can kill it, etc.

The other way is to look at sys.databases; execute the query below:

SELECT [name], recovery_model_desc, log_reuse_wait_desc
FROM sys.databases

The output shows three columns: name, recovery_model_desc, and log_reuse_wait_desc.
The interesting one is log_reuse_wait_desc -> it shows why the transaction log (in terms of VLFs) cannot be reused, i.e. why it is not releasing its space.
For example:

NOTHING                      -------- log space can be reused
CHECKPOINT                   -------- no checkpoint has occurred yet since the last truncation
LOG_BACKUP                   -------- a log backup is required before the log can be truncated
ACTIVE_BACKUP_OR_RESTORE     -------- a backup or restore is currently in progress
ACTIVE_TRANSACTION           -------- an active (open) transaction exists
DATABASE_MIRRORING           -------- database mirroring is paused or the mirror is behind
REPLICATION                  -------- transactions are still pending delivery to replication
DATABASE_SNAPSHOT_CREATION   -------- a database snapshot is being created
LOG_SCAN                     -------- a log scan is in progress
AVAILABILITY_REPLICA         -------- an availability replica has not yet hardened the log (SQL 2012+)

From the above you can easily tell why the log cannot be reused, or whether it is free, so you have better control.

4. Go back to step 3 and then apply solution 1. When you use solution 1, it is better to set the log file growth sensibly so that the same issue does not repeat in the future (note that the recovery model also plays a role here).

————————————————————————————-
Some of the best ways to avoid rapid log file growth
————————————————————————————–
1. If the database is in the full recovery model, schedule the log backups more frequently.

2. Don't run a very large DML operation as one bulk/single transaction; instead, perform it chunk-wise (i.e. update a limited number of records per batch); see the sketch after this list.

3. Don't run ad-hoc queries directly on production; run them on a test server that has similar settings to prod (an exact mirror).

4. Whenever you want to perform large operations, you can switch to the simple or bulk-logged recovery model. That doesn't mean the log file will not grow, but the logging is minimal compared to the full recovery model.

5. If you plan to run bulk operations, first verify the current log file usage, take a log backup if needed, and then test it out.
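A minimal sketch of the chunk-wise approach from point 2; the table name and filter are hypothetical. Each small batch keeps the active portion of the log short, and in full recovery the regular log backups between batches let the space be reused:

DECLARE @rows int
SET @rows = 1
WHILE @rows > 0
BEGIN
    -- Delete (or update) a limited number of rows per transaction
    DELETE TOP (5000) FROM dbo.StagingOrders   -- hypothetical table
    WHERE LoadDate < '20120101'                -- hypothetical filter
    SET @rows = @@ROWCOUNT
END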

If I remember more, I will add them here.

Finally: you cannot prevent the log file from growing rapidly in every case, but you can manage the growth if you follow the best practices.

References
——————–
Managing the Transaction Log
http://msdn.microsoft.com/en-us/library/ms345382%28v=sql.105%29.aspx
Transaction Log Truncation
http://msdn.microsoft.com/en-us/library/ms189085%28v=sql.105%29.aspx
Managing the Size of the Transaction Log File
http://msdn.microsoft.com/en-us/library/ms365418%28v=sql.105%29.aspx

Unable to put my database offline :)

Usually I see users trying to take a database offline, either for a planned activity or for some other reason, but they are unable to do so. So what is wrong?

1. Whenever you are putting a database offline, ensure that no users are connected to it, and check that no SQL Server Agent jobs or Windows scheduled tasks are running that reference this database (I mention this just to be sure).

So what if users are connected -> can I go ahead and kill them… 🙂
A) Please do not do this unless the users confirm that you can kill their sessions, because if you kill them unknowingly you may need to wait for the transactions to roll back completely; imagine if something leads to corruption (but anyway, I don't want to go off-topic here).
B) Better to ask the user to come out of the session.
C) If there is a batch ID that reconnects automatically every time (of course you can kill it; I will show how), it will simply reconnect after the kill, so ask the application team to stop it.

2. Then how can I check who is connected to the database?
Yes, you can use sp_who2, but please don't rely on it, because we have good DMVs to get this information.

Try using the scripts below.

This shows the number of connections per database:
SELECT
    DB_NAME(dbid) as DBName,
    COUNT(dbid) as NumberOfConnections,
    loginame as LoginName
FROM
    sys.sysprocesses
WHERE
    dbid > 0
GROUP BY
    dbid, loginame

If you are on 2005 or later, use the DMVs below to get the information (a sketch follows):
sys.dm_exec_connections
sys.dm_exec_sessions
sys.dm_exec_requests
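A minimal sketch joining two of these DMVs to list sessions with an active request in a given database; the database name is a placeholder (note: on SQL Server 2012 and later, sys.dm_exec_sessions has its own database_id column, which also catches idle sessions):

-- Sessions currently running a request in the target database
SELECT s.session_id, s.login_name, s.host_name, s.program_name, r.status
FROM sys.dm_exec_sessions AS s
JOIN sys.dm_exec_requests AS r
    ON r.session_id = s.session_id
WHERE r.database_id = DB_ID(N'YourDatabase')   -- hypothetical database name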

Other ways you can see this
=================
A) You can use the GUI. For example, in SQL Server 2008: right-click the SQL instance in SSMS (SQL Server Management Studio) -> Activity Monitor -> on the right side you see the Overview page;
go to the Processes tab,
select the SPID you want -> right-click to kill it if you want, or click Details to see what it is doing.

In SQL Server 2005 you can use Activity Monitor and filter it however you want. The best part is that when you are taking the database offline and want to kill all active connections to that database,
you get the “Drop Connections” check box -> just check the box and click OK -> your work is finished 🙂

B) The other simple way is shown below.
Open SSMS -> new query window -> run the T-SQL below:
USE master
GO
ALTER DATABASE <your database name here>
SET OFFLINE WITH ROLLBACK IMMEDIATE
GO

This takes the database offline, rolling back any open transactions immediately.
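To bring the database back online afterwards, the counterpart is:
USE master
GO
ALTER DATABASE <your database name here>
SET ONLINE
GO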

You can still find scripts that kill all the connections in one shot, but I usually follow the approach above.

Learn
========
Don't rely on sp_who2 -> try to use the DMVs.
Think about the different ways you can take a database offline.

Happy learning 🙂

Awarded as MCC in MSDN for 2012.

Hi All,

I was awarded MCC today (after sending a mail to MCCGA@microsoft.com). It makes me very happy, but the most important thing is that I learn from many technical people around the world, and it is a great pleasure to learn from their experience too 🙂

I am going to start my blog soon -> beginning with step-by-step posts, to help others and to learn myself when I make a mistake!

Thanks,

Rama Udaya.K