OUI does not recognize ASM disks in 11.2

Oracle Universal Installer (OUI) does not recognize ASM disks during the grid infrastructure installation.

Cause: oracleasm (ASMLib) was configured for the root user instead of the OS user that owns the Grid Infrastructure installation.
Configure oracleasm for the OS user that owns the Grid Infrastructure installation.

For example, if that user is oracle in the dba group, run the following as root:

/etc/init.d/oracleasm configure

and answer the prompts as follows:

answer oracle to 'Default user to own the driver interface'
answer dba to 'Default group to own the driver interface'
answer y to 'Start Oracle ASM library driver on boot'
answer y to 'Fix permissions of Oracle ASM disks on boot'

[root@myrac1 logs]# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface [grid]:
Default group to own the driver interface [asmadmin]:
Start Oracle ASM library driver on boot (y/n) [y]:
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver:                     [  OK  ]
Scanning the system for Oracle ASMLib disks:               [  OK  ]

Resolved.

Also, verify the following on all RAC nodes:

[root@myrac1 ~]# id grid
uid=1100(grid) gid=1000(oinstall) groups=1000(oinstall),1200(asmadmin),1201(asmdba),1202(asmoper)

[root@myrac11 ~]# id oracle
uid=1101(oracle) gid=1000(oinstall) groups=1000(oinstall),1201(asmdba),1300(dba),1301(oper)
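
If any of these groups or memberships are missing on a node, they can be created along these lines before re-running the id checks (the group names and IDs below simply mirror the output above; adjust them to your environment):

groupadd -g 1200 asmadmin                      # create a missing group with the same GID on every node
usermod -a -G asmadmin,asmdba,asmoper grid     # append the grid user to the ASM groups
usermod -a -G asmdba,dba,oper oracle           # append the oracle user to its groups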

Unable to instantiate ASM disk

This applies to all releases on any platform.

Problem: the ASM disks fail to instantiate when running scandisks on the second node in the cluster.

[root@myrac2 ~]# /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "OCR"
Unable to instantiate disk "OCR"
Instantiating disk "VD"
Unable to instantiate disk "VD"
Instantiating disk "DATA"
Unable to instantiate disk "DATA"
Instantiating disk "FRA"
Unable to instantiate disk "FRA"

Solution:

[root@myrac2 ~]# /usr/sbin/oracleasm configure
ORACLEASM_ENABLED=false
ORACLEASM_UID=
ORACLEASM_GID=
ORACLEASM_SCANBOOT=true
ORACLEASM_SCANORDER=""
ORACLEASM_SCANEXCLUDE=""
[root@myrac2 ~]# /usr/sbin/oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
[root@ms2rac2 ~]# ls -ltr /etc/sysconfig/oracleasm
lrwxrwxrwx 1 root root 24 Dec 11 2008 /etc/sysconfig/oracleasm -> oracleasm-_dev_oracleasm

[root@myrac2 ~]# /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "OCR"
Instantiating disk "VD"
Instantiating disk "DATA"
Instantiating disk "FRA"

[root@myrac2 ~]# /usr/sbin/oracleasm listdisks
DATA
FRA
OCR
VD

If you still get the following error:

[root@ms2rac2 ~]# /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Cleaning disk "FRA"
Cleaning disk "OCR"
Cleaning disk "VD"
Scanning system for ASM disks...
Instantiating disk "CRS"
Instantiating disk "FRA"
Unable to fix permissions on ASM disk "CRS"

Then check the grid and oracle user definitions on all nodes:

id oracle

id grid

Also check that the following are set in /etc/sysconfig/oracleasm:

ORACLEASM_UID=grid
ORACLEASM_GID=asmadmin

If anything is missing, create it and rescan the disks.
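
For example, a quick way to verify and correct the configuration (assuming the grid/asmadmin ownership used above) is:

grep '^ORACLEASM_[UG]ID' /etc/sysconfig/oracleasm   # both values should be populated
/usr/sbin/oracleasm configure -i                    # re-enter grid / asmadmin if they are empty
/usr/sbin/oracleasm scandisks
/usr/sbin/oracleasm listdisks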

If you are not able to see your ASM disks using oracleasm listdisks after running oracleasm scandisks, then try the following:

List the device using the ls -ltr command on all the nodes in the cluster.

ls -ltr /dev/device-name

Now scan the problem device (remember that at this stage the ASM disk has already been created with the createdisk command):

/usr/sbin/oracleasm scandisks -v /dev/device-name

This will instantiate the disk.

Note that the device name is the one used when the ASM disk was created with the oracleasm createdisk command.
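
For reference, the disk would originally have been labelled with something along these lines (the device name and label below are purely illustrative):

/usr/sbin/oracleasm createdisk DATA /dev/sdc1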

Now try the listdisks command; it will list the ASM disk, ready for creating ASM disk groups with the asmca utility.

You can also run the /usr/sbin/oracleasm-discover command to discover all the configured ASM disks.
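
For example (the discovery string below is only an assumption; adjust it to your configuration):

/usr/sbin/oracleasm-discover 'ORCL:*'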

Lastly, if you get the 'cannot instantiate' message during ASM disk creation, check the /etc/sysconfig/selinux file and set the following:

SELINUX=disabled

Now restart the server.
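
A quick sketch of checking and changing this (take a backup of the file first; /etc/sysconfig/selinux is usually a symlink to /etc/selinux/config):

getenforce                        # shows Enforcing, Permissive or Disabled
setenforce 0                      # optional: switch to permissive until the restart
vi /etc/sysconfig/selinux         # set SELINUX=disabled, then restart as noted above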

Cannot complete applications logon. You may have entered an invalid applications password, or there may have been a database connect error.

This applies to Oracle Applications 11.5.10.2

Resolved.

Node ID does not exist for the current application server ID

This applies to Oracle Applications version 11.5.10.2 or higher

Resolved.

User equivalence check failed

I could not get the user equivalence check to work on my Solaris 10 server when trying to install 11gR2 Grid.

No issues were encountered during the install.

<< Message: Result: User equivalence check failed for user "grid". >>

Cluvfy and the OUI try to find SSH on Solaris at /usr/local/bin.

The workaround is to create a soft link in /usr/local/bin pointing to /usr/bin/ssh.
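
On Solaris 10 that typically comes down to something like the following (run as root on each node; the scp link is an assumption but is usually wanted for the same reason):

mkdir -p /usr/local/bin
ln -s /usr/bin/ssh /usr/local/bin/ssh
ln -s /usr/bin/scp /usr/local/bin/scp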

Note: User equivalence is required for installations (i.e. using the OUI) and patching. DBCA, NETCA, and DB Control also require user equivalence.

Disable DBMS Scheduler jobs on startup

My intention is to disable all DBMS Scheduler jobs prior to opening the database.

Set job_queue_processes = 0 in the pfile. Create the spfile on ASM or NFS.
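
In the pfile this is a single parameter line; a sketch with an illustrative ASM path for the spfile (adjust the disk group, database name and file locations):

*.job_queue_processes=0

and, from SQL*Plus as SYSDBA:

create spfile='+DATA/MYDB/spfileMYDB.ora' from pfile='/u01/app/oracle/product/11.2.0/db_1/dbs/initMYDB.ora';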

Duplicate Database With RMAN Without Connecting To Target Database [ID 732624.1].

Manually, do the following:

  1. RMAN controlfile restore
  2. Manual restore and recovery (a rough sketch follows)
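
A very rough sketch of those two steps under RMAN, with illustrative backup locations and no connection to the original target (expect the final recovery to stop once it runs out of archived logs, which is normal in this scenario):

rman target /
startup nomount;
restore controlfile from '/backups/MYDB/ctl_c-1234567890-20110101-00';
alter database mount;
catalog start with '/backups/MYDB/';
restore database;
recover database;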

Start up the database in upgrade mode (resetlogs upgrade):

alter database open resetlogs upgrade;

exec dbms_scheduler.set_scheduler_attribute('SCHEDULER_DISABLED','TRUE');

Also, run  the output of the following SQL to disable jobs explicitly:

SELECT 'EXEC dbms_scheduler.disable('||chr(39)|| owner ||'."'|| job_name || '"'||chr(39) ||',TRUE);'
     from dba_scheduler_jobs
     where owner not in ('SYS','SYSTEM','EXFSYS','ORACLE_OCM') order by owner;
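
One way to run that output (the spool file name is just an example) is to spool it to a file and execute the file in the same SYSDBA session:

set heading off feedback off pagesize 0 linesize 200
spool /tmp/disable_jobs.sql
-- run the SELECT shown above here
spool off
@/tmp/disable_jobs.sql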

Note that only CONNECT AS SYSDBA is allowed when the database is open in UPGRADE mode. Hence, do the following:

@$ORACLE_HOME/rdbms/admin/utlirp.sql

@$ORACLE_HOME/rdbms/admin/utlrp.sql

shutdown immediate;

startup;
If the startup fails with an error identifying the controlfile, then do the following (a rough sketch appears after the list):

  1. Check the new controlfile names that have been created
  2. Update them in the PFILE
  3. Start up the database using the PFILE
  4. Recreate the SPFILE
  5. shutdown immediate
  6. startup
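
A rough sketch of steps 1-4, assuming OMF-created control files in a hypothetical +DATA disk group and pfile location:

asmcmd ls +DATA/MYDB/CONTROLFILE/          # find the new controlfile name(s)

Edit the pfile so it points at them, e.g. *.control_files='+DATA/MYDB/CONTROLFILE/Current.260.1234567890', then from SQL*Plus as SYSDBA:

startup pfile='/u01/app/oracle/product/11.2.0/db_1/dbs/initMYDB.ora';
create spfile from pfile='/u01/app/oracle/product/11.2.0/db_1/dbs/initMYDB.ora';
shutdown immediate;
startup;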

Optional Steps:

  1. Rename the database using NID at the end, if required.
  2. Disable archivelog mode (examples of both follow below).
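
Both can be sketched along these lines (database name, password and paths are placeholders):

nid target=sys/change_on_install dbname=NEWDB     # run against a mounted database; an open resetlogs is needed afterwards

and, to disable archivelog mode, from SQL*Plus as SYSDBA:

startup mount;
alter database noarchivelog;
alter database open;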

Cheers. Have fun.

IPMP and 11gR2 Grid Infrastructure on Sun Solaris 10

The 11.2.0.1 clusterware fails to start, or evicts nodes, if IPMP is used for the public or private network. This happened when I ran the root script. Node eviction can also happen when IPMP fails over the private IP from the active NIC to the other NIC in the same group.

Bug 9260196 is fixed in 11.2.0.2; for 11.2.0.1, the fix is available in patch 9729439. I applied this patch and re-ran the root scripts.
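
To confirm the fix is actually in the home before re-running the root scripts, something like the following can be used (the grid home path is illustrative):

/u01/app/11.2.0/grid/OPatch/opatch lsinventory | grep 9729439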

Oracle Grid Infrastructure 11.2.0.1 does not natively support multiple redundant networks, so an external redundancy mechanism must be used. Oracle Solaris provides two solutions: trunking (link aggregation based) and IPMP (IP multipathing).

Refer to Metalink notes 1069584.1 and 1067353.1.