Transaction Log Filesystem Is Full
This error occurs when the filesystem that holds the transaction logs is full.
A - Errors
Message in db2diag.log:
2008-03-14-03.25.11.261259+000 I70861C595 LEVEL: Error
PID : 286942 TID : 1 PROC : db2bp
INSTANCE: db2inst1 NODE : 000
APPID : *LOCAL.db2inst1.080314032027
FUNCTION: DB2 UDB, oper system services, sqlowrite, probe:200
MESSAGE : ZRC=0x850F000C=-2062614516=SQLO_DISK "Disk full."
DIA8312C Disk was full.
DATA #1 : File handle, PD_TYPE_SQO_FILE_HDL, 8 bytes
0x2FF21A28 : 0000 0004 0000 0000 ........
DATA #2 : unsigned integer, 4 bytes
307
DATA #3 : signed integer, 4 bytes
270
DATA #4 : signed integer, 4 bytes
2
B - Possible Causes
Cause One - The filesystem designated for log files is not large enough for the required number and size of log files.
Cause Two - Applications are generating transaction logs faster than the server can archive them; in other words, new log files are created by the applications faster than the server can send them to TSM. (A quick way to check the log configuration is sketched below.)
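To see how much log space the database is configured to use and how logs are archived, you can inspect the log-related database configuration parameters. A minimal sketch, assuming the database is called SAMPLE (replace with your database name; parameter names such as LOGARCHMETH1 may differ slightly between DB2 versions):
db2 get db cfg for SAMPLE | grep -iE "LOGFILSIZ|LOGPRIMARY|LOGSECOND|LOGARCHMETH1"
As a rough estimate, the space needed for active logs is (LOGPRIMARY + LOGSECOND) x LOGFILSIZ x 4 KB, plus room for archived logs that have not yet been sent to TSM.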
C - Investigating
First of all, check whether the database is partitioned; go to the sqllib directory and run:
cat db2nodes.cfg
If there is more than one entry, the database is partitioned. Here we cover the case of a non-partitioned database.
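For reference, a single-partition db2nodes.cfg usually contains a single line in the form "partition-number hostname logical-port"; the hostname below is hypothetical:
0 dbserver01 0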
After that, identify the filesystem where the problem is. Start with:
df -k
to discover the full path to the filesystem. Go to that filesystem and list all existing log files in that directory (ls -ltr). If there is a large number of logs, you can check which one is the first active log and how many active logs DB2 needs. Connect to the database and type:
db2pd -db dbname -logs
It will show you the first active log file and the number of primary and secondary logs. After that, you can compare this with the log files stored in your filesystem and check whether all archived logs are being sent to TSM and removed from the filesystem (see the sketch below).
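The same information is also available from the database configuration. A minimal sketch, assuming the database is called SAMPLE, archive logging is in use, the log path is /db2logs/SAMPLE/NODE0000 and the first active log reported is S0000123.LOG (all hypothetical values); any S*.LOG file with a number lower than the first active log is no longer needed by DB2:
db2 get db cfg for SAMPLE | grep -i "Path to log files"
db2 get db cfg for SAMPLE | grep -i "First active log file"
cd /db2logs/SAMPLE/NODE0000
ls -1 S*.LOG | awk -F '[S.]' '$2 + 0 < 123'    # lists log files older than S0000123.LOG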
D - Resolution
After your analysis, you can solve this problem in one of two ways:
Option 1 - Increase the filesystem size so it can store the required number of logs (see the sketch below);
Option 2 - Send old log files to TSM and remove them from the filesystem.
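For Option 1, the exact command depends on the platform. A minimal sketch, assuming an AIX JFS2 filesystem mounted at /db2logs (hypothetical mount point) and free space available in the volume group:
chfs -a size=+2G /db2logs
df -k /db2logs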
For Option 2, you can follow the steps below:
Check which log files are no longer active and are no longer needed by DB2.
Check TSM to verify which log files have already been sent to it:
dsmc q b "logfilenamewithfullpath"
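To check several candidate logs at once, you can loop over them. A minimal sketch, assuming /db2logs/SAMPLE/NODE0000 is the log directory (hypothetical path):
for f in /db2logs/SAMPLE/NODE0000/S00*.LOG; do
    dsmc q b "$f"
done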
Send all old log files to TSM. Be careful not to include any active log, or it can cause a database crash:
dsmc i "oldlogfilenamewithfullpath"
After sending the old log files to TSM, you can delete them from the filesystem:
rm oldlogfilename
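The archive and removal steps can be combined so that a log file is deleted only after the TSM backup succeeds. A minimal sketch, assuming OLD_LOGS holds only the inactive log files identified above (hypothetical variable and file names):
OLD_LOGS="/db2logs/SAMPLE/NODE0000/S0000100.LOG /db2logs/SAMPLE/NODE0000/S0000101.LOG"
for f in $OLD_LOGS; do
    dsmc i "$f" && rm "$f"    # remove the file only if the TSM backup succeeded
done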
As a next step, check the filesystem size again and confirm that the free space is now acceptable:
df -k .
If the problem still persists after this procedure, you can trace which application is generating this large number of logs, using the command:
db2pd -db dbname -transactions
which will show you which applications are reading and/or writing rows in the database, the agent ID of each application, and other information.
If needed, you can obtain more detailed information about the running application using:
db2 get snapshot for application agentid agid
Once you have collected all the details about the offending application, you can ask the customer for authorization to kill it.
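With that authorization, the application can then be forced off the database. A minimal sketch, assuming 1234 is the application handle / agent ID identified in the db2pd and snapshot output above (hypothetical value):
db2 get snapshot for application agentid 1234
db2 "force application (1234)"    # forces the application off the database; run only with customer approval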