My log file is full. What do I do now? How do I approach fixing the issue, or at least find out what's going on?
Everyone knows we cannot fully control log file usage driven by user/DB engine activity: it depends on the logic of the T-SQL code (i.e. DML operations), the number of concurrent operations from users against the same database, optimization jobs (reindex/update statistics), long-running transactions, transactions left open for a very long time, bulk operations, and so on. However, if we follow best practices, we can at least manage the log file growth. Anyway, let us see here.
I will not cover every detail, but I will explain the steps you can follow.
A user says: oops, my transaction failed because it says the log file is full... bla bla... and in the error log you see error code 9002 along with other information.
Whenever the log file is full, look at the following things:
1. Execute DBCC SQLPERF(LOGSPACE) and verify whether the log space used for your database has reached 99 or 100%.
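For example:

```sql
-- Reports log file size and the percentage of log space used
-- for every database on the instance
DBCC SQLPERF(LOGSPACE);
-- Look at the "Log Space Used (%)" column for your database;
-- a value near 100% confirms the 9002 symptom.
```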
2. Then check what auto-growth is set for the log file. Suppose you observe that the log file is restricted to some maximum size in MB.
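One way to check the growth settings, run in the affected database:

```sql
-- size/max_size/growth are in 8 KB pages; dividing by 128 gives MB
SELECT name,
       size / 128 AS size_mb,
       CASE max_size
            WHEN -1 THEN 'Unlimited'
            ELSE CAST(max_size / 128 AS varchar(20)) + ' MB'
       END AS max_size,
       CASE is_percent_growth
            WHEN 1 THEN CAST(growth AS varchar(10)) + '%'
            ELSE CAST(growth / 128 AS varchar(20)) + ' MB'
       END AS growth
FROM sys.database_files
WHERE type_desc = 'LOG';
```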
Note: before going to solution 1, check step 3.
(Solution 1: OK, if growth is restricted, then find the drive where the log file resides. You can execute the query below:
sp_helpfile --you will get the complete information about the log file's drive path
Verify that the drive holding the log file has enough free space in it. The reason I say this: if the drive has enough space, then you can increase the auto-growth/maximum size for the log file, but ensure you increase it according to your need. You shouldn't go ahead and grab 300 GB just because the space is there, since other database files may need that space if the drive is shared. If the drive is dedicated to only your database, then you will be happy. Either way, know what you are getting into. 🙂
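As a sketch of raising the cap (the database name MyDB and logical file name MyDB_log are hypothetical; substitute your own):

```sql
-- Hypothetical names: replace MyDB / MyDB_log with your database
-- and the log file's logical name (from sp_helpfile)
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_log,
             MAXSIZE = 20480MB,     -- new cap; size this to your actual need
             FILEGROWTH = 512MB);   -- fixed-size growth is preferable to percent growth
```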
OK, my drive is also full. Then what do I have to do?
Well, you have to add another log file on a drive that has enough space, or you need to move the log file (and once it is moved, run DBCC CHECKDB to verify the database's consistency).
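Adding a second log file on another drive might look like this (the names and path are hypothetical):

```sql
-- Hypothetical database name, file name, and path: adjust to your environment
ALTER DATABASE MyDB
ADD LOG FILE (NAME = MyDB_log2,
              FILENAME = 'E:\SQLLogs\MyDB_log2.ldf',
              SIZE = 1024MB,
              FILEGROWTH = 512MB);
```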
3. Sometimes the log space is released automatically, without any user action.
Wow, how is that?
A) Check for open transactions (for example with the DMV sys.dm_tran_active_transactions).
Verify that there are no open transactions. If you do see that an open transaction exists, then look in the same output for when it started and how long it has been running (follow the SPID). You need to wait for that transaction (whether a user transaction, an optimization job, or anything else) to either succeed or fail; if it fails, it throws an error saying why it failed, so analyze accordingly.
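As a sketch, DBCC OPENTRAN reports the oldest active transaction in the current database and its SPID:

```sql
-- Run in the affected database: reports the oldest active transaction, if any
DBCC OPENTRAN;
-- If it reports a SPID, inspect what that session last ran, e.g.:
-- DBCC INPUTBUFFER(<spid>);  -- shows the last statement the session sent
```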
Once those transactions complete successfully: if the database is in the FULL recovery model, then the next transaction log backup will clear up the space in the VLFs and allow the next users' transaction activity.
In the SIMPLE recovery model, once the transaction is done the space is truncated internally (at the next checkpoint), so usage goes back to normal automatically.
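In the FULL recovery model, it is the log backup that lets the VLFs be reused (database name and backup path below are hypothetical):

```sql
-- Hypothetical name and path: back up the transaction log so its space can be reused
BACKUP LOG MyDB TO DISK = 'E:\Backups\MyDB_log.trn';
```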
Sometimes a developer or another user runs ad-hoc queries or an implicit transaction (without a COMMIT/end of the transaction), forgets to end the transaction, or runs something on production without realizing it. In that case you have to deal with it carefully: don't just kill the SPID ASAP. Instead, get the information from the user about what the transaction is, what it is doing, whether it can be killed, and so on!
The other way is to look at sys.databases; execute the query below:
SELECT [name], recovery_model_desc, log_reuse_wait_desc
FROM sys.databases;
The output shows three columns: name, recovery_model_desc, and log_reuse_wait_desc.
The interesting column is log_reuse_wait_desc: it shows why the transaction log file (in terms of VLFs) cannot be reused, i.e. why it is not releasing the space in it. Common values:
NOTHING -- log space can currently be reused; nothing is holding it
CHECKPOINT -- no checkpoint has occurred since the last truncation (the log is waiting on a checkpoint)
LOG_BACKUP -- a transaction log backup is required before the log can be truncated
ACTIVE_BACKUP_OR_RESTORE -- a data backup or restore is currently in progress
ACTIVE_TRANSACTION -- an active (possibly long-running) transaction exists
Based on the value above, you can easily tell why the log cannot be reused (or confirm that it is free), so you will have better control.
4. Go back to step 3 and then apply solution 1. It is better to increase the log file growth appropriately so the same repeated issue does not occur in the future (note that the recovery model also plays a role here).
Some of the best ways to avoid rapid log file growth:
1. If the database is in the FULL recovery model, schedule the log backup more frequently.
2. Don't run a very large DML operation as one bulk/single transaction; instead, perform it chunk-wise (i.e. update a limited number of records per batch).
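A minimal sketch of chunk-wise deletion (the table and column names are hypothetical):

```sql
-- Delete in batches of 5000 so each transaction stays small and the log
-- can be truncated / backed up between batches
WHILE 1 = 1
BEGIN
    DELETE TOP (5000) FROM dbo.BigTable
    WHERE CreatedDate < '2020-01-01';

    IF @@ROWCOUNT = 0 BREAK;   -- nothing left to delete
END;
```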
3. Don't run ad-hoc queries directly on production; instead, run them on a test server that has settings similar to prod (an exact mirror, ideally).
4. Whenever you want to perform larger operations, you can switch to the SIMPLE or BULK_LOGGED recovery model. That doesn't mean the log file will not grow, but certain operations are minimally logged compared with the FULL recovery model.
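Switching the recovery model for a bulk load might look like this (hypothetical database name; take a log backup before and after the switch so the log chain is covered):

```sql
-- Hypothetical name: minimally log the bulk operation, then switch back
ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED;
-- ... run the bulk operation here ...
ALTER DATABASE MyDB SET RECOVERY FULL;
```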
5. If you plan to run bulk operations, verify the log file usage before running them; if needed, take a log backup first, then test it out.
If I remember more, I will add it up here.
Finally: you cannot stop the log file from growing rapidly in every case, but you can manage it if you follow the best practices.
Further reading:
Managing the Transaction Log
Transaction Log Truncation
Managing the Size of the Transaction Log File