Wednesday, November 14, 2018

OEM 12c Unable To Complete Network Operation Against My Oracle Support

Back up setDomainEnv.sh before editing:

cd /opt/app/oracle/gc_inst/user_projects/domains/GCDomain/bin/

cp setDomainEnv.sh setDomainEnv.sh.old

Stop the OMS and web tier:

emctl stop oms



vi setDomainEnv.sh


Find the following line:

JAVA_OPTIONS="${JAVA_OPTIONS} ${JAVA_PROPERTIES} -Dwlw.iterativeDev=${iterativeDevFlag} -Dwlw.testConsole=${testConsoleFlag} -Dwlw.logErrorsToConsole=${logErrorsToConsoleFlag}"

Change to:

JAVA_OPTIONS="${JAVA_OPTIONS} ${JAVA_PROPERTIES} -Dwlw.iterativeDev=${iterativeDevFlag} -Dwlw.testConsole=${testConsoleFlag} -Dwlw.logErrorsToConsole=${logErrorsToConsoleFlag} -Dcom.sun.net.ssl.enableECC=false"

Restart the OMS and web tier:

emctl start oms
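To confirm the OMS and its components came back up, a quick status check can be run (standard emctl commands; output varies by install):

emctl status oms
emctl status oms -details       # prompts for the SYSMAN password and lists each component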


Rollback

To back out the MOS connectivity fix (for example on host hisipoem), restore the original file and restart the OMS:

cd /opt/app/oracle/gc_inst/user_projects/domains/GCDomain/bin/


cp setDomainEnv.sh.old setDomainEnv.sh

Wednesday, September 26, 2018

sql profile syntax for force match

execute dbms_sqltune.accept_sql_profile(task_name =>'H43369.4496173495', task_owner => 'xxxx',replace => TRUE, force_match => TRUE);
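To confirm the profile was created with force matching enabled, a quick check against the standard DBA_SQL_PROFILES view (just a sketch; filter on your profile name as needed):

select name, status, force_matching, created
  from dba_sql_profiles
 order by created desc;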

Thursday, September 20, 2018

Find lost weblogic domain password for OEM 12c

To recover the WebLogic domain password, follow the steps below.

First, go to the following location:

/opt/app/oracle/middlewarer2/oracle_common/common/bin



**********************************************************************************************
Create a file named decrypt.py:

vi decrypt.py

#/bin/python
#=====================================================================
#
# $Id: decrypt.py $
#
# PURPOSE:    Script to decrypt any Password or Username
#             within a WebLogic Server Domain
#
# PARAMETERS: none
#
# NOTES:      none
#
# AUTHOR:     Dirk Nachbar, https://dirknachbar.blogspot.com
#
# MODIFIED:
#
#
#=====================================================================

# Import weblogic.security.internal and weblogic.security.internal.encryption
from weblogic.security.internal import *
from weblogic.security.internal.encryption import *

# Provide Domain Home Location
domain = raw_input("Provide Domain Home location: ")

# Get encryption service with above Domain Home Location
encryptService = SerializedSystemIni.getEncryptionService(domain)
clearOrEncryptService = ClearOrEncryptedService(encryptService)

# Provide the encrypted password or username, e.g. from boot.properties
encrypted_pwd = raw_input("Provide encrypted password or username (e.g.: {AES}jNdVLr...): ")

# Clear the encrypted value from escaping characters
cleared_pwd = encrypted_pwd.replace("\\", "")

# Personal security hint :-)
raw_input("Make sure that nobody is staying behind you :-) Press ENTER to see the password ...")

# Decrypt the encrypted password or username
print "Value in cleartext is: " + clearOrEncryptService.decrypt(cleared_pwd)



#END

**********************************************************************************************


You can get the encrypted username and password from boot.properties at the location below:


cd /opt/app/oracle/middlewarer2/gc_inst/user_projects/domains/GCDomain/servers/EMGC_ADMINSERVER/security

cat boot.properties
#Generated by Configuration Wizard on Sat Oct 20 10:20:22 PDT 2012
username={AES}tFHZTUTQ5xhgS3WQicmnSPgTfFv1xswVwndUaCeb7qk=
password={AES}QZ06i3z6EAoK82eb20n0dZr+xYAxUAk26HtgmOt4Pp0=

**********************************************************************************************

export DOMAIN_HOME=/opt/app/oracle/middlewarer2/gc_inst/user_projects/domains/GCDomain

cd /opt/app/oracle/middlewarer2/oracle_common/common/bin

Make sure decrypt.py has execute permission
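If needed, the execute bit can be set with a simple chmod (path assumes decrypt.py was created in the current directory):

chmod +x decrypt.py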

./wlst.sh decrypt.py

Provide Domain Home location: /opt/app/oracle/middlewarer2/gc_inst/user_projects/domains/GCDomain
Provide encrypted password or username (e.g.: {AES}jNdVLr...): {AES}tFHZTUTQ5xhgS3WQicmnSPgTfFv1xswVwndUaCeb7qk=

The value is now printed in cleartext.





Tuesday, August 21, 2018

12CR2 Grid and DB patching ( RAC and NON RAC)

OPatch in the Grid home owned by oracle:dba

OPatch in the DB home owned by oracle:dba

Patch folder unzipped as the oracle user

opatchauto to be run by root

A quick sanity-check sketch for these prep items follows below.
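A minimal sanity-check sketch using the paths from this post; the patch zip name is a placeholder, not the real file name:

# Home ownership (expect oracle:dba)
ls -ld /opt/app/crs/12.2.0.1 /opt/app/oracle/product/12.2.0/12.2.0.1

# OPatch version in each home
/opt/app/crs/12.2.0.1/OPatch/opatch version
/opt/app/oracle/product/12.2.0/12.2.0.1/OPatch/opatch version

# Unzip the bundle patch as the oracle user (zip name is a placeholder)
unzip -q <patch_28183653_zip> -d /export/home/oracle/oracle_software/12c/patches/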


****************************

Pre-req (conflict) check

****************************

For Grid Infrastructure Home, as home user:

export ORACLE_HOME=/opt/app/crs/12.2.0.1/

$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /export/home/oracle/oracle_software/12c/patches/28183653/28163133
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /export/home/oracle/oracle_software/12c/patches/28183653/28163190
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /export/home/oracle/oracle_software/12c/patches/28183653/28163235
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /export/home/oracle/oracle_software/12c/patches/28183653/26839277
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /export/home/oracle/oracle_software/12c/patches/28183653/27144050



For Database home, as home user:

export ORACLE_HOME=/opt/app/oracle/product/12.2.0/12.2.0.1


$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /export/home/oracle/oracle_software/12c/patches/28183653/28163133
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir /export/home/oracle/oracle_software/12c/patches/28183653/28163190



****************************

Check System space before applying

****************************


For Grid Infrastructure Home, as home user:


vi /tmp/patch_list_gihome.txt

/export/home/oracle/oracle_software/12c/patches/28183653/28163133
/export/home/oracle/oracle_software/12c/patches/28183653/28163190
/export/home/oracle/oracle_software/12c/patches/28183653/28163235
/export/home/oracle/oracle_software/12c/patches/28183653/26839277
/export/home/oracle/oracle_software/12c/patches/28183653/27144050


Run the opatch command to check if enough free space is available in the Grid Infrastructure Home:

export ORACLE_HOME=/opt/app/crs/12.2.0.1/

$ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /tmp/patch_list_gihome.txt



For Database home, as home user:


vi /tmp/patch_list_dbhome.txt
/export/home/oracle/oracle_software/12c/patches/28183653/28163133
/export/home/oracle/oracle_software/12c/patches/28183653/28163190


Run opatch command to check if enough free space is available in the Database Home:

export ORACLE_HOME=/opt/app/oracle/product/12.2.0/12.2.0.1

$ORACLE_HOME/OPatch/opatch prereq CheckSystemSpace -phBaseFile /tmp/patch_list_dbhome.txt


****************************

Applying patch

****************************

As root user

export ORACLE_HOME=/opt/app/crs/12.2.0.1/

export PATH=$PATH:/opt/app/crs/12.2.0.1/OPatch



For RAC (GRID and DB)


cd /opt/app/crs/12.2.0.1/OPatch/

./opatchauto apply /export/home/oracle/oracle_software/12c/patches/28183653/

For non-RAC (GRID and DB)

GRID:


cd /opt/app/crs/12.2.0.1/OPatch

./opatchauto apply /export/home/oracle/oracle_software/12c/patches/28183653/ -oh /opt/app/crs/12.2.0.1/


Non-RAC DB:


cd /export/home/oracle/oracle_software/12c/patches/28183653/28163133

export ORACLE_HOME=/opt/app/oracle/product/12.2.0/12.2.0.1

$ORACLE_HOME/OPatch/opatch apply
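After the apply, a hedged post-patch check. opatch lspatches is the standard inventory listing; running datapatch from the DB home is the usual step to load the SQL portion of the patch after a manual opatch apply (not covered in the notes above):

# Inventory check in each home
/opt/app/crs/12.2.0.1/OPatch/opatch lspatches
/opt/app/oracle/product/12.2.0/12.2.0.1/OPatch/opatch lspatches

# DB home only, with the database open: load the SQL portion of the patch
cd /opt/app/oracle/product/12.2.0/12.2.0.1/OPatch
./datapatch -verbose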

Thursday, August 16, 2018

undo tablespace usage based on user

SELECT s.inst_id,
        r.name                   rbs,
        nvl(s.username, 'None')  oracle_user,
        s.osuser                 client_user,
        p.username               unix_user,
        to_char(s.sid)||','||to_char(s.serial#) as sid_serial,
        p.spid                   unix_pid,
        TO_CHAR(s.logon_time, 'mm/dd/yy hh24:mi:ss') as login_time,
        t.used_ublk * 8192  as undo_BYTES,      -- assumes an 8K undo block size
                st.sql_text as sql_text
   FROM gv$process     p,
        v$rollname     r,
        gv$session     s,
        gv$transaction t,
        gv$sqlarea     st
  WHERE p.inst_id=s.inst_id
    AND p.inst_id=t.inst_id
    AND s.inst_id=st.inst_id
    AND s.taddr = t.addr
    AND s.paddr = p.addr(+)
    AND r.usn   = t.xidusn(+)
    AND s.sql_address = st.address
 --   AND t.used_ublk * 8192 > 10000
  AND t.used_ublk * 8192 > 1073741824
  ORDER
       BY undo_BYTES desc
/
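A companion query for overall active/unexpired undo usage per tablespace (dba_undo_extents is a standard view; this is only a rough-sizing sketch):

select tablespace_name,
       status,
       round(sum(bytes)/1024/1024) as mb
  from dba_undo_extents
 where status in ('ACTIVE','UNEXPIRED')
 group by tablespace_name, status
 order by tablespace_name, status;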



Wednesday, August 1, 2018

Gather incremental statistics for large or partitioned tables


BEGIN
  DBMS_STATS.SET_TABLE_PREFS (  
      ownname  =>  'HMA_TM_PROD_GEN2_MONITOR'
,     tabname  =>  'ZZT_SERVICE_LOG'
,     pname    =>  'INCREMENTAL'
,     pvalue   =>  'true'
);
END;
/

-- This should return TRUE, meaning incremental statistics are enabled for the table
select dbms_stats.get_prefs('INCREMENTAL','HMA_TM_PROD_GEN2_MONITOR','ZZT_SERVICE_LOG') from dual;

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS (
      ownname  => 'HMA_TM_PROD_GEN2_MONITOR'
,     tabname  => 'ZZT_SERVICE_LOG'
);
END;
/
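After the gather, a hedged way to confirm the per-partition stats were refreshed, using the standard dba_tab_statistics view (owner and table name as in this post):

select partition_name, num_rows, last_analyzed, stale_stats
  from dba_tab_statistics
 where owner = 'HMA_TM_PROD_GEN2_MONITOR'
   and table_name = 'ZZT_SERVICE_LOG'
 order by partition_name;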

Monday, June 4, 2018

[BUG] root.sh fails while installing 12.2.0.1 grid

DNS errors during the root.sh



Environment :-
=============
Oracle :- 12.2 Cluster Installation
Operating System :- Solaris Sparc 11 – 64bit

Issue :-
=======
root.sh fails with CLSRSC-175 errors for Grid Infrastructure 12.2.0.1 on Solaris because of a hard DNS check that expects the DNS result to exactly match the FQDN.

Oracle Bug :-
===========
Bug 26002739 – SOLARIS:ROOTUPGRADE.SH FAILING DUE HARD DNS CHECK

Errors when root.sh was executed:
===============================

2018/05/02 10:24:39 CLSRSC-175: Failed to write the checkpoint 'ROOTCRS_STACK' with status 'START' (error code 1)
2018/05/02 10:24:41 CLSRSC-175: Failed to write the checkpoint 'ROOTCRS_AFDINST' with status 'START' (error code 1)
2018/05/02 10:24:49 CLSRSC-175: Failed to write the checkpoint 'ROOTCRS_AFDINST' with status 'SUCCESS' (error code 1)
2018/05/02 10:24:49 CLSRSC-175: Failed to write the checkpoint 'ROOTCRS_AFDINST' with status 'SUCCESS' (error code 1)
2018/05/02 10:24:51 CLSRSC-175: Failed to write the checkpoint 'ROOTCRS_STACK' with status 'FAIL' (error code 1)

Other logs under cfgtoollogs folder
cluutil8.log
.
.
[main] [ 2018-05-02 09:57:19.314 PDT ] [ClusterwareCkpt.:162] UnknownHostException caught
[main] [ 2018-05-02 09:57:19.315 PDT ] [ClusterUtil.main:338] nettle: DNS name not found [response code 3]

Oracle Support Suggestion:-
==========================

The issue looks to be caused by the bug below, which enforces a hard DNS check and expects the DNS result to exactly match the FQDN.

Bug 26002739 – SOLARIS:ROOTUPGRADE.SH FAILING DUE HARD DNS CHECK

Please attempt the below workaround and provide feedback:

1) Take a backup of "GRID_HOME/bin/cluutil" and "GRID_HOME/crs/sbs/cluutil.sbs" (a shell sketch of this step follows after step 3).

2) Edit both files and make the following change (under the SunOS block):

Comment out: JRE_OPTIONS="-d64 -Dsun.net.spi.nameservice.provider.1=dns,sun -Xms256m -Xmx512m"

Add a new line right below the commented text: JRE_OPTIONS="-d64 -Xms256m -Xmx512m"

===== The updated SunOS block should look like:

SunOS)
LD_LIBRARY_PATH_64=$ORACLE_HOME/lib:$ORACLE_HOME/srvm/lib:$LD_LIBRARY_PATH_64
export LD_LIBRARY_PATH_64
LD_LIBRARY_PATH=$ORACLE_HOME/lib:$ORACLE_HOME/srvm/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH
# JRE_OPTIONS="-d64 -Dsun.net.spi.nameservice.provider.1=dns,sun -Xms256m -Xmx512m"
JRE_OPTIONS="-d64 -Xms256m -Xmx512m"
;;

======

3) Then execute root.sh from the 12.2 GI home and share the result.
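A minimal shell sketch of the backup in step 1, assuming GRID_HOME points at the 12.2 GI home (the backup file names are arbitrary):

cp $GRID_HOME/bin/cluutil $GRID_HOME/bin/cluutil.orig
cp $GRID_HOME/crs/sbs/cluutil.sbs $GRID_HOME/crs/sbs/cluutil.sbs.orig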

After applying the workaround, root.sh completed successfully.

See the reference notes below:

Please try the workaround given in the bug or use the patch p26002739_122010_SOLARIS64.zip

Ref: root.sh or rootupgrade.sh Fails With CLSRSC-175 Error For Grid Infrastructure 12.2.0.1 on Solaris ( Doc ID 2285577.1 )

How to Apply a Grid Infrastructure Patch Before root script (root.sh or rootupgrade.sh) is Executed? ( Doc ID 1410202.1 )

Hope this helps.

Monday, May 21, 2018

3-Node RAC Data Guard

Standby

Create a temporary listener on the standby host (listener.ora entry below):

LISTENER_TEMP=
      (DESCRIPTION=
       (ADDRESS_LIST=
        (ADDRESS= (PROTOCOL=TCP)(HOST=exadb01)(PORT=1525))
      )
     )

SID_LIST_LISTENER_TEMP =
    (SID_LIST=
     (SID_DESC=
     (SID_NAME=apple1)
     (ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1)
     )
    )
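Once listener.ora is updated, the temporary listener can be started and checked with standard lsnrctl commands (run as the oracle software owner on the standby host):

lsnrctl start LISTENER_TEMP
lsnrctl status LISTENER_TEMP
# apple1 should appear as a statically registered service; status UNKNOWN is expected until the instance is started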
******************************************************************************
Primary

Add TNS entries for the source and target on all three nodes.

apple =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = exadb10)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = apple)
    )
  )

appleDR =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = exadb01)(PORT = 1525))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = apple1)
    )
  )
******************************************************************************
Standby 

Add TNS entries for the source and target on all three nodes.

apple =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = exadb10)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = apple)
    )
  )

appleDR =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = exadb01)(PORT = 1525))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = apple1)
    )
  )
******************************************************************************
Primary
ALTER DATABASE FORCE LOGGING;
alter system set log_archive_max_processes=4 scope=both sid='*';
alter system set LOG_ARCHIVE_DEST_1='LOCATION=/oracle/archive/apple VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=apple' scope=both sid='*';
alter system set LOG_ARCHIVE_DEST_2='SERVICE=appledr VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=apple' scope=both sid='*';
alter system set LOG_ARCHIVE_DEST_STATE_1='ENABLE' scope=both sid='*';
alter system set LOG_ARCHIVE_DEST_STATE_2='ENABLE' scope=both sid='*';
alter system set STANDBY_FILE_MANAGEMENT='AUTO' scope=both sid='*';
alter system set FAL_SERVER='appleDR' scope=both sid='*';
alter system set FAL_CLIENT='apple' scope=both sid='*';
alter system set log_archive_config='dg_config=(apple,apple)' scope=both sid='*';


create pfile from spfile;

alter user sys identified by "xxxx" account unlock;

orapwd file=orapwapple1 password=xxxx entries=5 force=y

scp  initapple1.ora oracle@exadb01:/opt/app/oracle/product/11.2.0.4/dbs/initapple1.ora



scp  orapwapple1 oracle@exadb01:/opt/app/oracle/product/11.2.0.4/dbs/orapwapple1
scp  orapwapple1 oracle@exadb02:/opt/app/oracle/product/11.2.0.4/dbs/orapwapple2
scp  orapwapple1 oracle@exadb03:/opt/app/oracle/product/11.2.0.4/dbs/orapwapple3


******************************************************************************
Standby

At this point, the parameter file on the standby server will have all parameters, including the RAC-specific ones.

Copy the init file as below so the RAC version is kept for later:

cp initapple1.ora initapple1.ora.rac

Now edit initapple1.ora and remove all the RAC-related parameters: the per-instance entries, cluster_database, thread, undo, etc. Keep the apple1_xxx parameters as they are.

Modify the below parameters

*.LOG_ARCHIVE_DEST_1='LOCATION=+RECO VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=apple'
*.LOG_ARCHIVE_DEST_2='SERVICE=apple VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=apple'
*.LOG_ARCHIVE_DEST_STATE_1='ENABLE'
*.LOG_ARCHIVE_DEST_STATE_2='ENABLE'
*.FAL_SERVER='apple'
*.FAL_CLIENT='appledr'
*.control_files='+DATA/apple/controlfile/control01.ctl','+REDO/apple/controlfile/control02.ctl','+RECO/apple/controlfile/control03.ctl'
*.db_create_file_dest='+DATA'
*.db_create_online_log_dest_1='+REDO'

Create the respective directories referenced in the pfile, then set the environment for apple1 and start the instance in NOMOUNT:

export ORACLE_SID=apple1
export ORAENV_ASK=NO
. oraenv

startup nomount
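Before running the duplicate from the primary, it can help to confirm both TNS aliases resolve and that SYS can reach the NOMOUNT standby instance through the temporary listener (standard utilities; aliases and the password placeholder are from this post):

tnsping apple
tnsping appleDR
sqlplus sys/xxxx@appleDR as sysdba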

******************************************************************************

Primary

rman target sys/xxxx@apple auxiliary sys/xxxx@appleDR


duplicate target database for standby from active database dorecover;

After it finishes, exit the RMAN console.

******************************************************************************
Standby

Create the spfile from the RAC pfile as below:

create spfile='+CRS/apple/parameterfile/spfileapple.ora' from pfile='/opt/app/oracle/product/11.2.0.4/dbs/initapple1.ora.rac';

shut immediate;

vi initapple1.ora    (the file should now contain only the spfile pointer shown below)
spfile='+crs/apple/parameterfile/spfileapple.ora'

scp  initapple1.ora oracle@exadb02:/opt/app/oracle/product/11.2.0.4/dbs/initapple2.ora
scp  initapple1.ora oracle@exadb03:/opt/app/oracle/product/11.2.0.4/dbs/initapple3.ora

******************************************************************************
Standby

Add the database and its 3 instances to the cluster registry:


srvctl add database -d apple -o /opt/app/oracle/product/11.2.0.4 -p +CRS/apple/parameterfile/spfileapple.ora -r physical_standby -a DATA,REDO,RECO,CRS

srvctl add instance -d apple -i apple1 -n exadb01
srvctl add instance -d apple -i apple2 -n exadb02
srvctl add instance -d apple -i apple3 -n exadb03

srvctl start database -d apple

srvctl status database -d apple

srvctl config database -d apple

******************************************************************************
Standby
After verifying the configuration, stop the database on all three instances and start it on a single instance:

srvctl stop database -d apple

srvctl start instance -d apple -i apple1

sqlplus / as sysdba

Start the MRP process:

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
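To confirm recovery is actually running, a hedged check on the standby using v$managed_standby (the usual 11.2 view; MRP0 typically shows APPLYING_LOG or WAIT_FOR_LOG):

select process, status, thread#, sequence#
  from v$managed_standby
 where process like 'MRP%' or process like 'RFS%';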

******************************************************************************


Note

To stop the MRP process, use:

alter database recover managed standby database cancel;
******************************************************************************

To verify Data Guard sync, use the commands below.

Standby

select thread#,max(sequence#) from v$archived_log where applied='YES' group by thread#;

PRIMARY

select thread#,max(sequence#) from v$archived_log group by thread#;

or

select max(sequence#) from v$archived_log where  resetlogs_change#=(SELECT resetlogs_change# FROM v$database);
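Another hedged check on the standby side, using the standard v$dataguard_stats view (values are populated while MRP is running):

select name, value, unit, time_computed
  from v$dataguard_stats
 where name in ('apply lag','transport lag');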

******************************************************************************








Wednesday, January 31, 2018

Drop database links from other schemas using sys

Create or replace procedure DROP_LINK(schemaName varchar2, dbLink varchar2) is
        plsql   varchar2(1000);
        cur     number;
        uid     number;
        rc      number;
begin
        -- Look up the user id of the schema that owns the private link
        select u.user_id
          into uid
          from dba_users u
         where u.username = schemaName;

        -- Build the DROP statement and parse/execute it as that user,
        -- so the private database link can be dropped from a SYS session
        plsql := 'drop database link "'||dbLink||'"';
        cur   := SYS.DBMS_SYS_SQL.open_cursor;
        SYS.DBMS_SYS_SQL.parse_as_user(
              c             => cur,
              statement     => plsql,
              language_flag => DBMS_SQL.native,
              userID        => uid
        );
        rc := SYS.DBMS_SYS_SQL.execute(cur);

        SYS.DBMS_SYS_SQL.close_cursor(cur);
end;
/
 
 exec DROP_LINK( 'WGPOMSUSR', 'DBNAME' );     -- (schema name, database link name)

 select * from dba_db_links;

 drop procedure DROP_LINK;
