At one of my client's implementations, I recently noticed a lot of the errors below. They have four OHS nodes reverse-proxying various FMW components. Three of them were showing the error messages below, and overall performance was also poor.
“No locks available: apr_proc_mutex_lock failed. Attempting to shutdown process gracefully”
Process Ping Failed: ohs1~OHS~OHS~1 (749623442:1533343490) [The connection receive timed out]
[2016-05-15T11:24:35.4496+02:00] [OHS] [WARNING:32] [OHS-9999] [core.c] [host_id: am3hh068] [host_addr: 188.8.131.52] [pid: 15335490] [tid: 1] [user: oradba] [VirtualHost: main] child process 10879076 still did not exit, sending a SIGTERM
[2016-05-15T11:24:35.4496+02:00] [OHS] [INCIDENT_ERROR:10] [OHS-9999] [core.c] [host_id: em12c.test.com] [host_addr: 10.10.10.10] [pid: 17093] [tid: 140384688568064] [user: oracle] [VirtualHost: main] (116)Stale file handle: apr_proc_mutex_lock failed. Attempting to shutdown process gracefully.
Upon analysis I found that the OHS lock file (http.lock) was placed on a shared NFS mount. The LockFile directive in the httpd.conf file sets the path to the lock file used. If the lock file is placed on an NFS mount, any delay in the filesystem's response can force the OPMN process to restart OHS.
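To confirm whether a given lock-file directory actually sits on NFS, a quick check of the filesystem type can help. This is only a sketch; the default path `/tmp` is a stand-in, so pass your actual OHS instance lock directory as the first argument:

```shell
#!/bin/sh
# Sketch: warn if the directory holding the OHS lock file is on NFS.
# /tmp is just a placeholder default; pass your real lock directory.
lock_dir="${1:-/tmp}"

# GNU stat reports the filesystem type name (e.g. nfs, ext2/ext3, tmpfs)
fstype=$(stat -f -c %T "$lock_dir" 2>/dev/null)

case "$fstype" in
  nfs*) echo "WARNING: $lock_dir is on NFS ($fstype); move LockFile to local disk" ;;
  *)    echo "OK: $lock_dir is on a local filesystem ($fstype)" ;;
esac
```

Running it against the directory named by the LockFile directive tells you immediately whether you are exposed to this issue.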
The Oracle documentation also states:
Functional or performance issues may be encountered when an Oracle HTTP Server component is created on a shared filesystem, including NFS (Network File System). In particular, lock files or Unix sockets used by OHS may not work or may have severe performance degradation; WLS requests routed by mod_wl_ohs may have severe performance degradation due to filesystem accesses in the default configuration.
To resolve this issue:
a) Set the LockFile path in the <IfModule mpm_prefork_module> and <IfModule mpm_worker_module> sections of httpd.conf to a local disk path (one that is not NFS mounted):
LockFile "<LOCAL_DISK_PATH>"
b) Save the changes.
c) Restart the OHS instances.
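For illustration, the edited sections of httpd.conf might look like the fragment below. The local path shown is purely an example; any directory on a local (non-NFS) disk that the OHS user can write to will do:

```apache
# Example only: point LockFile at a local, non-NFS directory
<IfModule mpm_prefork_module>
    LockFile "/u01/local/ohs/locks/http_lock"
</IfModule>

<IfModule mpm_worker_module>
    LockFile "/u01/local/ohs/locks/http_lock"
</IfModule>
```

After saving this change, restart the OHS instances so the new lock-file location takes effect.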