Recently, on a staging environment, we noticed that LOGPRIMARY was set to 35, yet only 21 log files were present on the active log path.
Number of primary log files                (LOGPRIMARY) = 35
Number of secondary log files              (LOGSECOND)  = 45

db2inst1# ls -ltr
total 13440704
-rw------- 1 db2inst1 db2iadm1       512 Dec  1 19:10 SQLLPATH.TAG
-rw------- 1 db2inst1 db2iadm1 327688192 Jan  6 12:18 S0063132.LOG
-rw------- 1 db2inst1 db2iadm1 327688192 Jan  6 13:32 S0063133.LOG
-rw------- 1 db2inst1 db2iadm1 327688192 Jan  6 13:36 S0063134.LOG
-rw------- 1 db2inst1 db2iadm1 327688192 Jan  6 14:36 S0063135.LOG
-rw------- 1 db2inst1 db2iadm1 327688192 Jan  6 14:55 S0063136.LOG
-rw------- 1 db2inst1 db2iadm1 327688192 Jan  6 15:51 S0063137.LOG
-rw------- 1 db2inst1 db2iadm1 327688192 Jan  6 16:13 S0063138.LOG
-rw------- 1 db2inst1 db2iadm1 327688192 Jan  6 18:02 S0063139.LOG
-rw------- 1 db2inst1 db2iadm1 327688192 Jan  7 11:28 S0063140.LOG
-rw------- 1 db2inst1 db2iadm1 327688192 Jan  7 14:17 S0063141.LOG
-rw------- 1 db2inst1 db2iadm1 327688192 Jan  8 13:19 S0063142.LOG
-rw------- 1 db2inst1 db2iadm1 327688192 Jan  9 05:06 S0063143.LOG
-rw------- 1 db2inst1 db2iadm1 327688192 Jan 11 09:26 S0063145.LOG
-rw------- 1 db2inst1 db2iadm1 327688192 Jan 11 09:26 S0063144.LOG
-rw------- 1 db2inst1 db2iadm1 327688192 Jan 11 10:25 S0063146.LOG
-rw------- 1 db2inst1 db2iadm1 327688192 Jan 11 10:51 S0063147.LOG
-rw------- 1 db2inst1 db2iadm1 327688192 Jan 11 11:17 S0063148.LOG
-rw------- 1 db2inst1 db2iadm1 327688192 Jan 11 11:39 S0063149.LOG
-rw------- 1 db2inst1 db2iadm1 327688192 Jan 16 17:37 S0063129.LOG
-rw------- 1 db2inst1 db2iadm1 327688192 Jan 18 06:59 S0063130.LOG
-rw------- 1 db2inst1 db2iadm1 327688192 Jan 21 13:06 S0063131.LOG
How was anyone able to connect to the database if DB2 could not allocate the required number of log files? Here's what actually happened. When DB2 was first started, there was plenty of space on the filesystem holding the log path, so the database came up fine with all 35 primary logs allocated. Eventually the space ran out (it is, after all, a staging environment), and DB2 stopped allocating more log files than the available space allowed. To understand this better, here's what happens behind the scenes. When a log file fills up, DB2 copies it to the archive log directory and then renames or recreates it to keep the pool of active logs going. Because there was no space left to do that, DB2 simply carried on with the log files it already had: this particular error is one DB2 tolerates and ignores.
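The numbers above can be checked with a little arithmetic. This is a minimal sketch, not real DB2 tooling; the LOGFILSIZ value of 80000 pages is an assumption inferred from the 327688192-byte files in the listing (a log file on disk is LOGFILSIZ 4 KB pages plus a 2-page header).

```shell
# Assumed from the ls output above: 80000*4096 + 2*4096 = 327688192 bytes.
LOGFILSIZ=80000     # log file size in 4 KB pages (assumption)
LOGPRIMARY=35
LOGSECOND=45

file_bytes=$(( LOGFILSIZ * 4096 + 2 * 4096 ))        # one log file on disk
primary_bytes=$(( LOGPRIMARY * file_bytes ))          # space for all primaries
worst_case_bytes=$(( (LOGPRIMARY + LOGSECOND) * file_bytes ))

echo "one log file : $file_bytes bytes"
echo "primary logs : $primary_bytes bytes"
echo "worst case   : $worst_case_bytes bytes"
```

With these numbers, 35 primary logs alone need about 10.7 GiB, and primaries plus all 45 secondaries need about 24.4 GiB, which is more than the 18169724928-byte filesystem shown in the diagnostics below could ever hold.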
2015-10-21-16.33.04.258307-240 E1738A463            LEVEL: Event
  /primary
  Block size        = 8192 bytes
  Total size        = 18169724928 bytes
  Free size         = 0 bytes
  Total # of inodes = 5664
  FS name           =
  Mount point       =
  FSID              = 18446744071563116545
  FS type name      = vxfs
  DIO/CIO mount opt = None
  Device type       = N/A
  FS type           = 0x2
CALLSTCK: (Static functions may not be resolved correctly, as they are resolved to the nearest symbol)
  [0] 0xC00000001444B740

2015-10-22-09.35.16.499947-240 I172033A432          LEVEL: Info
PID     : 27322                TID : 94445          PROC : db2sysc 0
INSTANCE: db2inst1             NODE : 000           DB   : SAMPLE
EDUID   : 94445                EDUNAME: db2loggr (SAMPLE) 0
FUNCTION: DB2 UDB, data protection services, sqlpgInitRecoverable, probe:8210
MESSAGE : Not able to allocate all primary log files due to DISK FULL.
          This error is ignored.
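Since the error is silently ignored, it is worth scanning the diagnostic log for it proactively. A minimal sketch, assuming the default diagnostic path under the instance home (check DIAGPATH in your dbm cfg for the real location):

```shell
# Assumed default location; verify with your instance's DIAGPATH setting.
DIAG="$HOME/sqllib/db2dump/db2diag.log"

if [ -f "$DIAG" ]; then
    # Print the message plus a line of context for each occurrence.
    grep -B 1 "This error is ignored" "$DIAG"
else
    echo "no db2diag.log found at $DIAG"
fi
```

On systems where the db2diag tool is available, its built-in filtering can replace the plain grep.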
So, please make sure the filesystem holding the active log path has plenty of free space. While DB2 is already up and running, it won't complain. But if for some reason you restart the DB2 engine without first freeing up space, you won't be able to bring the database back online until you do. Planning is key.
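One way to plan ahead is to compare the space the primary logs require against what the filesystem actually has free before (re)starting the engine. A sketch under stated assumptions: LOG_PATH and the log file size are hypothetical placeholders taken from the output above; substitute your own "Path to log files" value from `db2 get db cfg`.

```shell
# Hypothetical log path; defaults to /tmp only so the sketch is runnable.
LOG_PATH="${LOG_PATH:-/tmp}"

# LOGPRIMARY (35) times the on-disk log file size seen earlier.
need_bytes=$(( 35 * 327688192 ))

# POSIX df: available space (in 1 KB blocks) is the 4th column of row 2.
avail_kb=$(df -P "$LOG_PATH" | awk 'NR==2 {print $4}')
avail_bytes=$(( avail_kb * 1024 ))

if [ "$avail_bytes" -lt "$need_bytes" ]; then
    echo "WARNING: only $avail_bytes bytes free on $LOG_PATH, need $need_bytes"
else
    echo "OK: $avail_bytes bytes free on $LOG_PATH"
fi
```

A check like this in a pre-start script turns the silent "error is ignored" situation into a loud warning while there is still time to add space.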