Before reading this post, please check the known issues in the README of your patch first; you may find a solution there. Also, keep in mind that the scenario introduced in this post is just one pattern of this kind of problem.
In this case, applying the proactive bundle patch (BP) 31307682 to an Oracle Database 12.1.0.2 RAC database running on Grid Infrastructure failed.
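For reference, OPatch can pre-check each sub-patch for conflicts against each home before opatchauto is run. A typical invocation, using this environment's paths and sub-patch 31001106 (one of the sub-patches shipped inside 31307682), is run as the owner of each home:
[grid@primary01 ~]$ /u01/app/12.1.0/grid/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /home/patches/31307682/31001106 -oh /u01/app/12.1.0/grid
[oracle@primary01 ~]$ /u01/app/oracle/product/12.1.0/db_1/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /home/patches/31307682/31001106 -oh /u01/app/oracle/product/12.1.0/db_1
Repeat the check for the other sub-patch directories listed in the patch README. Here is the apply that failed: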
[root@primary01 ~]# export PATH=$PATH:/u01/app/12.1.0/grid/OPatch
[root@primary01 ~]# opatchauto apply /home/patches/31307682/
OPatchauto session is initiated at Wed Sep 19 13:04:49 2020
System initialization log file is /u01/app/12.1.0/grid/cfgtoollogs/opatchautodb/systemconfig2020-09-19_01-04-55PM.log.
Session log file is /u01/app/12.1.0/grid/cfgtoollogs/opatchauto/opatchauto2020-09-19_01-05-47PM.log
The id for this session is K8ZC
Executing OPatch prereq operations to verify patch applicability on home /u01/app/12.1.0/grid
Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/12.1.0/db_1
Patch applicability verified successfully on home /u01/app/oracle/product/12.1.0/db_1
Patch applicability verified successfully on home /u01/app/12.1.0/grid
Verifying SQL patch applicability on home /u01/app/oracle/product/12.1.0/db_1
SQL patch applicability verified successfully on home /u01/app/oracle/product/12.1.0/db_1
Preparing to bring down database service on home /u01/app/oracle/product/12.1.0/db_1
Successfully prepared home /u01/app/oracle/product/12.1.0/db_1 to bring down database service
Bringing down CRS service on home /u01/app/12.1.0/grid
Prepatch operation log file location: /u01/app/12.1.0/grid/cfgtoollogs/crsconfig/crspatch_primary01_2020-09-19_01-11-06PM.log
CRS service brought down successfully on home /u01/app/12.1.0/grid
Performing prepatch operation on home /u01/app/oracle/product/12.1.0/db_1
Prepatch operation completed successfully on home /u01/app/oracle/product/12.1.0/db_1
Start applying binary patch on home /u01/app/oracle/product/12.1.0/db_1
Binary patch applied successfully on home /u01/app/oracle/product/12.1.0/db_1
Performing postpatch operation on home /u01/app/oracle/product/12.1.0/db_1
Postpatch operation completed successfully on home /u01/app/oracle/product/12.1.0/db_1
Start applying binary patch on home /u01/app/12.1.0/grid
Binary patch applied successfully on home /u01/app/12.1.0/grid
Starting CRS service on home /u01/app/12.1.0/grid
Postpatch operation log file location: /u01/app/12.1.0/grid/cfgtoollogs/crsconfig/crspatch_primary01_2020-09-19_01-40-25PM.log
opatchauto failed with error code 42
Below is the failure output from opatchauto. It is a bit long, but I have highlighted the important messages.
Failed to start CRS service on home /u01/app/12.1.0/grid
Execution of [GIStartupAction] patch action failed, check log for more details. Failures:
Patch Target : primary01->/u01/app/12.1.0/grid Type[crs]
Details: [
---------------------------Patching Failed---------------------------------
Command execution failed during patching in home: /u01/app/12.1.0/grid, host: primary01.
Command failed: /u01/app/12.1.0/grid/perl/bin/perl -I/u01/app/12.1.0/grid/perl/lib -I/u01/app/12.1.0/grid/opatchautocfg/db/dbtmp/bootstrap_primary01/patchwork/crs/install /u01/app/12.1.0/grid/opatchautocfg/db/dbtmp/bootstrap_primary01/patchwork/crs/install/rootcrs.pl -postpatch
Command failure output:
Using configuration parameter file: /u01/app/12.1.0/grid/opatchautocfg/db/dbtmp/bootstrap_primary01/patchwork/crs/install/crsconfig_params
2020/09/16 13:40:36 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2020/09/16 13:43:29 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2020/09/16 13:43:36 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'
CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'primary01'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'primary01'
CRS-2677: Stop of 'ora.drivers.acfs' on 'primary01' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'primary01' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'primary01'
CRS-2672: Attempting to start 'ora.evmd' on 'primary01'
CRS-2676: Start of 'ora.mdnsd' on 'primary01' succeeded
CRS-2676: Start of 'ora.evmd' on 'primary01' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'primary01'
CRS-2676: Start of 'ora.gpnpd' on 'primary01' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'primary01'
CRS-2676: Start of 'ora.gipcd' on 'primary01' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'primary01'
CRS-2676: Start of 'ora.cssdmonitor' on 'primary01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'primary01'
CRS-2672: Attempting to start 'ora.diskmon' on 'primary01'
CRS-2676: Start of 'ora.diskmon' on 'primary01' succeeded
CRS-2676: Start of 'ora.cssd' on 'primary01' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'primary01'
CRS-2672: Attempting to start 'ora.ctssd' on 'primary01'
CRS-2676: Start of 'ora.ctssd' on 'primary01' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'primary01' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'primary01'
CRS-2676: Start of 'ora.asm' on 'primary01' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'primary01'
CRS-2676: Start of 'ora.storage' on 'primary01' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'primary01'
CRS-2676: Start of 'ora.crf' on 'primary01' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'primary01'
CRS-2676: Start of 'ora.crsd' on 'primary01' succeeded
CRS-6017: Processing resource auto-start for servers: primary01
CRS-2672: Attempting to start 'ora.net2.network' on 'primary01'
CRS-2672: Attempting to start 'ora.net1.network' on 'primary01'
CRS-2676: Start of 'ora.net1.network' on 'primary01' succeeded
CRS-2672: Attempting to start 'ora.ons' on 'primary01'
CRS-2676: Start of 'ora.net2.network' on 'primary01' succeeded
CRS-2673: Attempting to stop 'ora.primary01_2.vip' on 'primary02'
CRS-2673: Attempting to stop 'ora.primary01.vip' on 'primary02'
CRS-2673: Attempting to stop 'ora.LISTENER2_SCAN1_NET2.lsnr' on 'primary02'
CRS-2677: Stop of 'ora.LISTENER2_SCAN1_NET2.lsnr' on 'primary02' succeeded
CRS-2673: Attempting to stop 'ora.scan1_net2.vip' on 'primary02'
CRS-2677: Stop of 'ora.primary01.vip' on 'primary02' succeeded
CRS-2672: Attempting to start 'ora.primary01.vip' on 'primary01'
CRS-2677: Stop of 'ora.primary01_2.vip' on 'primary02' succeeded
CRS-2672: Attempting to start 'ora.primary01_2.vip' on 'primary01'
CRS-2677: Stop of 'ora.scan1_net2.vip' on 'primary02' succeeded
CRS-2672: Attempting to start 'ora.scan1_net2.vip' on 'primary01'
CRS-2676: Start of 'ora.ons' on 'primary01' succeeded
CRS-2676: Start of 'ora.primary01.vip' on 'primary01' succeeded
CRS-2672: Attempting to start 'ora.LISTENER.lsnr' on 'primary01'
CRS-2676: Start of 'ora.primary01_2.vip' on 'primary01' succeeded
CRS-2672: Attempting to start 'ora.LISTENER2.lsnr' on 'primary01'
CRS-2676: Start of 'ora.scan1_net2.vip' on 'primary01' succeeded
CRS-2672: Attempting to start 'ora.LISTENER2_SCAN1_NET2.lsnr' on 'primary01'
CRS-2676: Start of 'ora.LISTENER.lsnr' on 'primary01' succeeded
CRS-2672: Attempting to start 'ora.smalldb.db' on 'primary01'
CRS-2676: Start of 'ora.LISTENER2.lsnr' on 'primary01' succeeded
CRS-2676: Start of 'ora.LISTENER2_SCAN1_NET2.lsnr' on 'primary01' succeeded
CRS-2672: Attempting to start 'ora.compdb.db' on 'primary01'
CRS-5017: The resource action "ora.smalldb.db start" encountered the following error:
ORA-27140: attach to post/wait facility failed
ORA-27300: OS system dependent operation:invalid_egid failed with status: 1
ORA-27301: OS failure message: Operation not permitted
ORA-27302: failure occurred at: skgpwinit6
ORA-27303: additional information: startup egid = 54321 (oinstall), current egid = 54323 (asmadmin)
. For details refer to "(:CLSN00107:)" in "/u01/app/12.1.0/grid/log/primary01/client/oraagent_1799.log".
CRS-2674: Start of 'ora.smalldb.db' on 'primary01' failed
CRS-2679: Attempting to clean 'ora.smalldb.db' on 'primary01'
CRS-2681: Clean of 'ora.smalldb.db' on 'primary01' succeeded
CRS-2676: Start of 'ora.compdb.db' on 'primary01' succeeded
===== Summary of resource auto-start failures follows =====
CRS-2807: Resource 'ora.smalldb.db' failed to start automatically.
CRS-6016: Resource auto-start has completed for server primary01
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [ROLLING PATCH]. The cluster active patch level is [0].
SQL Patching tool version 12.1.0.2.0 Production on Wed Sep 19 13:49:13 2020
Copyright (c) 2012, 2017, Oracle. All rights reserved.
Log file for this invocation: /u01/app/grid/cfgtoollogs/sqlpatch/sqlpatch_5107_2020_09_16_13_49_13/sqlpatch_invocation.log
Connecting to database...OK
Note: Datapatch will only apply or rollback SQL fixes for PDBs
that are in an open state, no patches will be applied to closed PDBs.
Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
(Doc ID 1585822.1)
Bootstrapping registry and package to current versions...done
Determining current state...done
Current state of SQL patches:
Bundle series DBBP:
ID 200714 in the binary registry and not installed in any PDB
Adding patches to installation queue and performing prereq checks...
Installation queue:
For the following PDBs: CDB$ROOT PDB$SEED PRIMARYCLUSTER
Nothing to roll back
The following patches will be applied:
31001106 (DATABASE BUNDLE PATCH 12.1.0.2.200714)
Installing patches...
Patch installation complete. Total patches installed: 3
Validating logfiles...
Patch 31001106 apply (pdb CDB$ROOT): SUCCESS
logfile: /u01/app/grid/cfgtoollogs/sqlpatch/31001106/23710733/31001106_apply__MGMTDB_CDBROOT_2020Sep16_13_51_22.log (no errors)
Patch 31001106 apply (pdb PDB$SEED): WITH ERRORS
logfile: /u01/app/grid/cfgtoollogs/sqlpatch/31001106/23710733/31001106_apply__MGMTDB_PDBSEED_2020Sep16_13_56_41.log (errors)
Error at line 3160: Warning: Package Body altered with compilation errors.
Patch 31001106 apply (pdb PRIMARYCLUSTER): WITH ERRORS
logfile: /u01/app/grid/cfgtoollogs/sqlpatch/31001106/23710733/31001106_apply__MGMTDB_PRIMARYCLUSTER_2020Sep16_13_56_41.log (errors)
Error at line 2967: Warning: Package Body altered with compilation errors.
Please refer to MOS Note 1609718.1 and/or the invocation log
/u01/app/grid/cfgtoollogs/sqlpatch/sqlpatch_5107_2020_09_16_13_49_13/sqlpatch_invocation.log
for information on how to resolve the above errors.
SQL Patching tool complete on Wed Sep 19 14:04:50 2020
2020/09/16 14:04:50 CLSRSC-488: Patching the Grid Infrastructure Management Repository database failed.
After fixing the cause of failure Run opatchauto resume
]
OPATCHAUTO-68061: The orchestration engine failed.
OPATCHAUTO-68061: The orchestration engine failed with return code 1
OPATCHAUTO-68061: Check the log for more details.
OPatchAuto failed.
OPatchauto session completed at Wed Sep 19 14:04:54 2020
Time taken to complete the session 60 minutes, 6 seconds
opatchauto failed with error code 42
Actually, OPATCHAUTO-68061 is a very generic error code, so by itself it is not very helpful for troubleshooting the patching failure.
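The real cause is usually a few lines above the OPATCHAUTO-68061 messages, or in the session log printed at the beginning of the run. A simple, read-only way to pull the failures out of that log, using the session log path from this run:
[root@primary01 ~]# grep -iE 'fail|error' /u01/app/12.1.0/grid/cfgtoollogs/opatchauto/opatchauto2020-09-19_01-05-47PM.log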
After doing some checks on the cluster, I found that all resources were online except one: a single-instance database.
[grid@primary01 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora.DATA.dg ora....up.type ONLINE ONLINE primary01
ora....ER.lsnr ora....er.type ONLINE ONLINE primary01
ora....R2.lsnr ora....er.type ONLINE ONLINE primary01
ora....T2.lsnr ora....er.type ONLINE ONLINE primary01
ora....T2.lsnr ora....er.type ONLINE ONLINE primary02
ora....T2.lsnr ora....er.type ONLINE ONLINE primary02
ora....T2.lsnr ora....er.type ONLINE ONLINE primary02
ora....N1.lsnr ora....er.type ONLINE ONLINE primary02
ora....N2.lsnr ora....er.type ONLINE ONLINE primary02
ora....N3.lsnr ora....er.type ONLINE ONLINE primary02
ora....N4.lsnr ora....er.type ONLINE ONLINE primary02
ora.MGMTLSNR ora....nr.type ONLINE ONLINE primary01
ora.asm ora.asm.type ONLINE ONLINE primary01
ora.compdb.db ora....se.type ONLINE ONLINE primary01
ora.cvu ora.cvu.type ONLINE ONLINE primary02
ora.mgmtdb ora....db.type ONLINE ONLINE primary01
ora....network ora....rk.type ONLINE ONLINE primary01
ora....network ora....rk.type ONLINE ONLINE primary01
ora.oc4j ora.oc4j.type ONLINE ONLINE primary02
ora.ons ora.ons.type ONLINE ONLINE primary01
ora....SM1.asm application ONLINE ONLINE primary01
ora....01.lsnr application ONLINE ONLINE primary01
ora....01.lsnr application ONLINE ONLINE primary01
ora....y01.ons application ONLINE ONLINE primary01
ora....y01.vip ora....t1.type ONLINE ONLINE primary01
ora....1_2.vip ora....t2.type ONLINE ONLINE primary01
ora....SM2.asm application ONLINE ONLINE primary02
ora....02.lsnr application ONLINE ONLINE primary02
ora....02.lsnr application ONLINE ONLINE primary02
ora....y02.ons application ONLINE ONLINE primary02
ora....y02.vip ora....t1.type ONLINE ONLINE primary02
ora....2_2.vip ora....t2.type ONLINE ONLINE primary02
ora.scan1.vip ora....ip.type ONLINE ONLINE primary02
ora....et2.vip ora....ip.type ONLINE ONLINE primary01
ora.scan2.vip ora....ip.type ONLINE ONLINE primary02
ora....et2.vip ora....ip.type ONLINE ONLINE primary02
ora.scan3.vip ora....ip.type ONLINE ONLINE primary02
ora....et2.vip ora....ip.type ONLINE ONLINE primary02
ora.scan4.vip ora....ip.type ONLINE ONLINE primary02
ora....et2.vip ora....ip.type ONLINE ONLINE primary02
ora.smalldb.db ora....se.type ONLINE OFFLINE
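As a side note, crs_stat has been deprecated for a long time; the equivalent and more readable check is:
[grid@primary01 ~]$ /u01/app/12.1.0/grid/bin/crsctl stat res -t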
An Instance Startup Failed
I noticed that a single-instance database called SMALLDB had not been started automatically by CRS. Why?
Let's check the alert log of SMALLDB and look for the startup messages. This is what we found:
[oracle@primary01 ~]$ vi /u01/app/oracle/diag/rdbms/smalldb/SMALLDB/trace/alert_SMALLDB.log
...
Errors in file /u01/app/oracle/diag/rdbms/smalldb/SMALLDB/trace/SMALLDB_vktm_62613.trc:
ORA-27140: attach to post/wait facility failed
ORA-27300: OS system dependent operation:invalid_egid failed with status: 1
ORA-27301: OS failure message: Operation not permitted
ORA-27302: failure occurred at: skgpwinit6
ORA-27303: additional information: startup egid = 54321 (oinstall), current egid = 54323 (asmadmin)
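These ORA-27303 messages mean that the instance's post/wait facility was created with effective group oinstall, while the oracle binary in the database home now carries the asmadmin group, so newly spawned processes can no longer attach to it. This is typical when the binary is relinked and its group/setgid bits are restored while the stack is being restarted during patching. A quick, read-only sanity check, assuming the database home path used in this post:
[oracle@primary01 ~]$ ls -l /u01/app/oracle/product/12.1.0/db_1/bin/oracle
In a role-separated installation the binary normally shows owner oracle, group asmadmin, with the setuid and setgid bits set (-rwsr-s--x). If that is what you see, the mismatch usually disappears once the instance is started fresh against the binary's final group, which is effectively what a restart of the stack does.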
Whatever the root cause, I decided to resume the process without changing anything.
[root@primary01 ~]# opatchauto resume
OPatchauto session is initiated at Wed Sep 19 15:41:12 2020
Session log file is /u01/app/12.1.0/grid/cfgtoollogs/opatchauto/opatchauto2020-09-19_03-41-13PM.log
Resuming existing session with id RBC7
Starting CRS service on home /u01/app/12.1.0/grid
Postpatch operation log file location: /u01/app/12.1.0/grid/cfgtoollogs/crsconfig/crspatch_primary01_2020-09-19_03-41-34PM.log
CRS service started successfully on home /u01/app/12.1.0/grid
Preparing home /u01/app/oracle/product/12.1.0/db_1 after database service restarted
No step execution required.........
Trying to apply SQL patch on home /u01/app/oracle/product/12.1.0/db_1
SQL patch applied successfully on home /u01/app/oracle/product/12.1.0/db_1
OPatchAuto successful.
--------------------------------Summary--------------------------------
Patching is completed successfully. Please find the summary as follows:
Host:primary01
RAC Home:/u01/app/oracle/product/12.1.0/db_1
Version:12.1.0.2.0
Summary:
==Following patches were SKIPPED:
Patch: /home/patches/31307682/26983807
Reason: This patch is not applicable to this specified target type - "rac_database"
Patch: /home/patches/31307682/31097826
Reason: This patch is not applicable to this specified target type - "rac_database"
==Following patches were SUCCESSFULLY applied:
Patch: /home/patches/31307682/31001106
Log: /u01/app/oracle/product/12.1.0/db_1/cfgtoollogs/opatchauto/core/opatch/opatch2020-09-19_14-36-25PM_1.log
Patch: /home/patches/31307682/31136382
Log: /u01/app/oracle/product/12.1.0/db_1/cfgtoollogs/opatchauto/core/opatch/opatch2020-09-19_14-36-25PM_1.log
Host:primary01
CRS Home:/u01/app/12.1.0/grid
Version:12.1.0.2.0
Summary:
==Following patches were SUCCESSFULLY applied:
Patch: /home/patches/31307682/26983807
Log: /u01/app/12.1.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-09-19_14-50-49PM_1.log
Patch: /home/patches/31307682/31001106
Log: /u01/app/12.1.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-09-19_14-50-49PM_1.log
Patch: /home/patches/31307682/31097826
Log: /u01/app/12.1.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-09-19_14-50-49PM_1.log
Patch: /home/patches/31307682/31136382
Log: /u01/app/12.1.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2020-09-19_14-50-49PM_1.log
OPatchauto session completed at Wed Sep 19 15:53:14 2020
Time taken to complete the session 12 minutes, 2 seconds
It was successful this time! In fact, I didn't fix anything; I just kept going by executing opatchauto resume.
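To double-check the result on this node, you can list the binary patches in each home and query the SQL patch registry of the database; for example (illustrative commands, using the paths from this environment):
[oracle@primary01 ~]$ /u01/app/oracle/product/12.1.0/db_1/OPatch/opatch lspatches
[grid@primary01 ~]$ /u01/app/12.1.0/grid/OPatch/opatch lspatches
[oracle@primary01 ~]$ sqlplus -s / as sysdba <<'EOF'
select patch_id, action, status from dba_registry_sqlpatch;
EOF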
Also note that MOS document Doc ID 2225042.1 may explain the issue.
By the way, don't forget to apply the bundle patch (BP) on the other nodes one by one, as shown below. In fact, for 12.1 I prefer a Patch Set Update (PSU) to a BP; I have talked about this in How to Patch Grid and Oracle 12.1.
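On the second node, that simply means repeating the commands from the beginning of this post as root:
[root@primary02 ~]# export PATH=$PATH:/u01/app/12.1.0/grid/OPatch
[root@primary02 ~]# opatchauto apply /home/patches/31307682/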