Parallel Recompilation using the utlrp.sql script

When we do major upgrades, recompilation of all objects is required. In some cases the number of objects can be enormous. In such a scenario we use the utlrp.sql script, which performs the recompilation in parallel. The script calls the UTL_RECOMP package with parameter 0 if no number of threads is specified. So what number of parallel threads will the script actually use?

Well, according to the Oracle docs, it is obtained from job_queue_processes.

However, by default the job_queue_processes is 1000! So this cannot be true.

To keep the story short: after some investigation, it looks like the following formula is used: CPU_COUNT * PARALLEL_THREADS_PER_CPU. (In case these values cannot be obtained, the number of threads will be 1.)

CONCLUSION:
The recompilation can be I/O bound, so careful choice of recompilation parallelism is important. It's a good idea to check the spfile parameters cpu_count and parallel_threads_per_cpu before executing the utlrp script, to make sure that we do not create artificial contention.
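The fallback logic described above can be sketched as a tiny shell function. This is my own illustration, not Oracle's code; in the real script the two inputs come from the cpu_count and parallel_threads_per_cpu parameters:

```shell
# Hypothetical sketch of the observed thread-count formula; the real work
# happens inside UTL_RECOMP, this just mirrors the behaviour described above.
utlrp_threads() {
  local cpu_count="$1" threads_per_cpu="$2"
  if [ -z "$cpu_count" ] || [ -z "$threads_per_cpu" ]; then
    echo 1   # fallback when the parameter values cannot be obtained
  else
    echo $(( cpu_count * threads_per_cpu ))
  fi
}

utlrp_threads 8 2   # prints 16
utlrp_threads "" "" # prints 1
```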

memlock configuration on linux for 12c part I

Going through the Oracle 12c installation docs for Linux, I came across the following memlock requirement:

So what is ‘memlock’ and where does this requirement come from?

Let’s see how it works. I have 1GB of physical memory configured (for a reason) and SGA_TARGET=900M on 12.1.0.2.

1. Let’s configure HugePages, set memlock much lower than the configured HugePages, and see what happens.

[oracle@single1 ~]$ cat /etc/sysctl.conf |  grep vm
vm.nr_hugepages = 454

[oracle@single1 ~]$ ulimit -l
217728

[root@single1 ~]# grep Huge /proc/meminfo
HugePages_Total: 447
HugePages_Free: 447
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB

Upon instance startup, Oracle wrote the following to the alert log:

Starting ORACLE instance (normal) (OS id: 2308)
Thu Sep 08 00:52:56 2016
CLI notifier numLatches:3 maxDescs:519
Thu Sep 08 00:52:56 2016
**********************************************************************
Thu Sep 08 00:52:56 2016
Dump of system resources acquired for SHARED GLOBAL AREA (SGA)

Thu Sep 08 00:52:56 2016
Per process system memlock (soft) limit = 213M
Thu Sep 08 00:52:56 2016
Expected per process system memlock (soft) limit to lock
SHARED GLOBAL AREA (SGA) into memory: 902M
Thu Sep 08 00:52:56 2016
Available system pagesizes:
4K, 2048K
Thu Sep 08 00:52:56 2016
Supported system pagesize(s):
Thu Sep 08 00:52:56 2016
PAGESIZE AVAILABLE_PAGES EXPECTED_PAGES ALLOCATED_PAGES ERROR(s)
Thu Sep 08 00:52:56 2016
4K Configured 5 228357 NONE
Thu Sep 08 00:52:56 2016
2048K 447 451 5 NONE
Thu Sep 08 00:52:56 2016
RECOMMENDATION:
Thu Sep 08 00:52:56 2016
1. For optimal performance, configure system with expected number
of pages for every supported system pagesize prior to the next
instance restart operation.
Thu Sep 08 00:52:56 2016
2. Increase per process memlock (soft) limit to at least 902MB
to lock 100% of SHARED GLOBAL AREA (SGA) pages into physical memory
Thu Sep 08 00:52:56 2016
**********************************************************************

[root@single1 ~]# grep Huge /proc/meminfo
HugePages_Total: 447
HugePages_Free: 442
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB

What we see now is that only 5 HugePages are used and the system is experiencing heavy thrashing. The instance never opened; I aborted it after an hour of waiting.

[oracle@single1 ED12]$ vmstat 1
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
9 5 327268 8276 76 2548 25329 5873 111710 5943 7419 3964 1 81 9 10 0
8 9 327192 8304 76 2780 31724 4704 147192 4712 7993 4184 1 86 0 13 0
12 6 327416 8208 76 2604 35900 5240 150804 5240 9309 4434 0 98 0 2 0
13 6 327312 8092 76 2876 43788 5624 120172 5640 9220 3972 1 94 0 5 0
9 5 327156 8268 76 3188 41672 5416 144988 5416 9602 4268 0 96 0 4 0
11 7 326944 8332 76 2856 42608 6332 132128 6356 9086 4061 1 94 0 5 0
14 8 327288 8072 76 2880 40352 5416 114864 5436 9221 4477 0 75 0 25 0
10 4 327464 8128 76 2608 40860 5144 125752 5240 8501 4199 1 78 0 21 0
10 7 327468 8092 80 2652 38696 3784 137976 3828 8758 3978 0 98 0 2 0
11 9 327364 8224 80 2816 40768 3692 138048 3704 8603 3874 0 91 0 9 0
12 5 327492 8472 88 2620 45792 4200 146660 4244 9484 4267 1 94 0 5 0
8 4 327396 8264 88 2896 42648 5616 124336 5616 9898 4251 1 96 0 3 0
11 5 327352 8116 84 2752 35832 5616 124496 5640 8382 4228 0 92 0 8 0
10 7 327272 8148 88 2700 35352 6628 117264 6628 8418 4152 1 96 0 3 0

This simple experiment shows that:

1. Setting memlock incorrectly prevents all the HugePages from being utilised by the database. This is just a waste of memory, as no other process is able to use them. In case of memory shortage this can lead to thrashing of the system (as in our case, where there is only 1GB of memory and 900MB of it is HugePages).

2. Also, starting from 11.2.0.3, Oracle should use all available HugePages even if the number of HugePages configured is less than the whole SGA. It looks like all configured HugePages must be lockable into memory for this to happen.

3. Therefore, proper memlock configuration is very important and should be correlated with the number of HugePages, not with the total amount of physical memory.
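A minimal sketch of that sizing, using the values from this experiment (memlock in limits.conf is expressed in KB, which is what ulimit -l reports):

```shell
# Derive a memlock value (KB) from the HugePages configuration, so that all
# configured HugePages can be locked, instead of tying it to physical RAM.
nr_hugepages=454       # vm.nr_hugepages from /etc/sysctl.conf above
hugepagesize_kb=2048   # Hugepagesize from /proc/meminfo
memlock_kb=$(( nr_hugepages * hugepagesize_kb ))
echo "$memlock_kb"     # candidate 'oracle soft/hard memlock' value: 929792
```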

Shell limits after 12c preinstall rpm

Just a quick note.

Recently I’ve been using the oracle-rdbms-server-12cR1-preinstall package to perform all the required prerequisites for 12c. It should install the required packages, modify kernel parameters and set user limits.
Before that I had custom shell limits, like the number of open files set to 2048.
After installing the rpm, I noticed that my user limits had changed:

ulimit -n
1024

I expected that and went to limits.conf to change them back;
however, when I looked at limits.conf, all the limits were set as they had been before the package.

cat /etc/security/limits.conf | grep nofile
oracle soft nofile 2048
oracle hard nofile 65536

It appears that the rpm package places the following file:

/etc/security/limits.d/oracle-rdbms-server-12cR1-preinstall.conf

which overrides /etc/security/limits.conf.
After changing oracle-rdbms-server-12cR1-preinstall.conf, everything is back as expected:

ulimit -n
2048

CONCLUSION:
The oracle-rdbms-server-12cR1-preinstall package places shell limits into the /etc/security/limits.d directory, which takes precedence over /etc/security/limits.conf.
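To see at a glance which nofile settings are in play, a small helper (the function name is mine) can grep both locations. pam_limits reads limits.conf first and then the *.conf files in limits.d, so the limits.d entries win for the same user/item pair:

```shell
# Hypothetical helper: print every nofile line PAM will consider. Entries
# from limits.d are listed last because pam_limits parses them after
# limits.conf, so they take precedence for the same user/item pair.
effective_nofile() {
  local etc="${1:-/etc/security}"   # base directory, parameterised for testing
  grep -h "nofile" "$etc/limits.conf" "$etc"/limits.d/*.conf 2>/dev/null
}
# e.g. run 'effective_nofile' on a live system to inspect the current setup
```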

VirtualBox problem with host-only adapter VERR_INTNET_FLT_IF_NOT_FOUND

Recently I hit the following issue. I was installing VirtualBox on a new machine, and in order to configure connectivity between the host and guest OS, we need to set up a host-only network and the corresponding adapter on the guest OS side.

However, after starting the VirtualBox machine I noticed the following error message:

Failed to open/create the internal network ‘HostInterfaceNetworking-VirtualBox Host-Only Ethernet Adapter #2’ (VERR_INTNET_FLT_IF_NOT_FOUND).
Failed to attach the network LUN (VERR_INTNET_FLT_IF_NOT_FOUND).

Result Code: E_FAIL (0x80004005)
Component: ConsoleWrap
Interface: IConsole {872da645-4a9b-1727-bee2-5585105b9eed}

I was pretty sure all the config was right, as I had done the same on another machine.
To keep the story short, it appears that the network adapter on the host machine was not configured correctly:
the VirtualBox NDIS6 bridged networking driver was not checked. I guess this happened after updating VirtualBox to a newer version.

SOLUTION:
Please make sure that the “VirtualBox NDIS6 Bridged Networking Driver” is checked on the host-only adapter.
Disable/enable the virtual network adapter to make the changes effective.

Grid Infrastructure redundant interconnect in 11.2.0.3

For many years Oracle Clusterware suffered from a lack of native support for cluster interconnect redundancy. At the same time, we could still configure more than one network for RAC cache fusion. To prevent the interconnect from being a single point of failure, we could use 3rd-party clustering solutions on older versions of Oracle, or OS-level solutions such as network card teaming; this often brought additional cost and complexity to the solution.

Starting from Oracle Grid Infrastructure 11.2.0.2, a new feature called HAIP (High Availability IP) was introduced, allowing us to configure up to 4 private networks for interconnect use. A new CRS resource, ‘ora.cluster_interconnect.haip’, has appeared in the CRS ‘kitchenware’; it uses the multicast protocol for the initial bootstrap and RAC cache fusion. This puts an additional requirement on the network, because this protocol is often turned off at the switch level. As another requirement, each private network must sit in its own subnet, otherwise unplugging a cable will cause node eviction.

We can configure up to 4 networks for the interconnect. If we set up more than one private network, let’s say 2, Oracle will assign and use 4 link-local addresses anyway, attached to the 2 physical network interfaces. In the following example I created 3 interconnects initially, and then removed the first (eth1) and the third (eth3), leaving only eth2. Here we see all 4 addresses attached to the single remaining network interface:

eth2      Link encap:Ethernet  HWaddr 08:00:27:AD:90:C2
          inet addr:10.0.1.1  Bcast:10.0.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2334553 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2305159 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2497624256 (2.3 GiB)  TX bytes:2474843374 (2.3 GiB)

eth2:1    Link encap:Ethernet  HWaddr 08:00:27:AD:90:C2
          inet addr:169.254.20.220  Bcast:169.254.63.255  Mask:255.255.192.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

eth2:2    Link encap:Ethernet  HWaddr 08:00:27:AD:90:C2
          inet addr:169.254.91.47  Bcast:169.254.127.255  Mask:255.255.192.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

eth2:3    Link encap:Ethernet  HWaddr 08:00:27:AD:90:C2
          inet addr:169.254.169.41  Bcast:169.254.191.255  Mask:255.255.192.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

eth2:4    Link encap:Ethernet  HWaddr 08:00:27:AD:90:C2
          inet addr:169.254.231.159  Bcast:169.254.255.255  Mask:255.255.192.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

In terms of RAC interconnect configuration, we will see all 4 addresses configured at the database level.

SQL>  select * from GV$CLUSTER_INTERCONNECTS where inst_id=1;

   INST_ID NAME            IP_ADDRESS       IS_ SOURCE
---------- --------------- ---------------- --- -----------------------
         1 eth1:1          169.254.20.220   NO
         1 eth2:1          169.254.91.47    NO
         1 eth3:1          169.254.169.41   NO
         1 eth1:2          169.254.231.159  NO


The HAIP feature allows us to change the IP of the interconnect on the fly without impacting the service, unlike in previous versions (where we needed to stop CRS on the nodes to change interconnects). I created a small test case which generates lots of messages and blocks flying over the interconnect. This was done by creating a small table and updating the same blocks: even rows from the 1st instance and odd rows from the 2nd instance. The test generated several hundred messages and blocks per second travelling over the interconnect network, with a load of about 4MB/sec.

While the load was present, I removed 2 networks, leaving only one; then I added a 2nd network and dropped the first one. The interfaces weren’t dropped immediately: a few minutes passed until the relocation occurred. According to ‘gc blocks lost’/‘gc current lost’ and other statistics, no blocks were dropped during the operation.
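For reference, this kind of online change is done with oifcfg. A hedged sketch of the commands (a config fragment: the interface name and subnet here are illustrative, not taken from my cluster):

```shell
# Illustrative oifcfg commands for reshuffling private networks online;
# run as the Grid Infrastructure owner. Interface/subnet values are assumed.
oifcfg getif                                             # list current networks
oifcfg setif -global eth4/10.0.4.0:cluster_interconnect  # add a new private network
oifcfg delif -global eth1                                # drop an existing one
```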

End of part I

Signal 11 failure on installing 11gR2 grid infrastructure 32 bit

For the last few weeks I’ve been installing Oracle 11gR2 Grid Infrastructure (Clusterware) 32-bit on VMware Workstation. The installation process finishes successfully, but when root.sh is launched, the following error appears:
“Failure with signal 11 from command: /u01/app/grid/bin/ocrconfig -local -upgrade oracle oinstall
Failed to create or upgrade OLR”

I have checked out many solutions, but with no success.

As a last chance, I decided to install the 64-bit Grid Infrastructure. The installation finished successfully with no errors. So I believe the ‘signal 11’ error comes from the 32-bit version.

Hope I save some time for someone trying to resolve a problem like this.

Unable to install EM GridControl 10.2.0.3 oms on rhel5 (all 64bit)

I’ve experienced a problem installing EM GridControl OMS 10.2.0.4 64-bit on Red Hat Linux 5 64-bit. The Configuration Assistant was unable to start the web server, with an error finding the libdb.so.2 library.

We attempted to install different libs with no success. After a few hours of experiments we decided to use RHEL 4. It seems that RHEL 5 was not supported yet.