Tuesday, January 31, 2017

"connection management call elapsed time" wait event on Oracle DB

We have an instance named db1 on the Exadata server. This DB is monitored by the Platinum gateway, as we have Platinum support for Exadata. We had done a lot of testing on it as a POC, and now that it was time to make it production, I decided to drop and rebuild the database.

After I rebuilt it, OEM started showing a lot of concurrency waits in brown. All I could get from OEM was "library cache lock", and most of the sessions holding this lock belonged to the SYS user.

So, to find out what was actually causing the issue, I generated an AWR report, and sure enough I found the same library cache lock there.

I ran an ADDM report. It found the same concurrency wait event consuming 40% of DB time, said it might be related to the shared pool size, and offered no recommendations.

So I dug deeper into the AWR report and found an interesting wait statistic at the top:

connection management call elapsed time

Statistic Name                             Time (s)     % of DB Time
connection management call elapsed time    19,275.19    99.37

Most of the DB time was going to this statistic, so I suspected some application was constantly trying to connect to the DB, possibly with a wrong password.

In our database, an account gets locked after 5 incorrect password attempts, except for users on the application and monitoring profiles. So I thought I'd start with the DBSNMP user.

I thought of checking dba_audit_trail, and there I found that the orarom user (a read-only account for Platinum support) was constantly trying to log in and getting ORA-01017 (invalid username/password).
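If you want to dig out the failing logins yourself, a query along these lines against dba_audit_trail works (ORA-01017 shows up as returncode 1017; audit_trail must be enabled for this view to be populated):

```sql
-- Failed logon attempts grouped by user and host (returncode 1017 = ORA-01017)
SELECT username, userhost, COUNT(*) AS failed_logins,
       MAX(timestamp) AS last_attempt
  FROM dba_audit_trail
 WHERE returncode = 1017
 GROUP BY username, userhost
 ORDER BY failed_logins DESC;
```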

This was because I had used a new password for dbsnmp when I created the new DB. So I logged into the Platinum support gateway and updated the password there. Immediately, the wait event was gone.

Monday, January 30, 2017

Setting Hugepages on Exadata


Get the script from MOS:

Oracle Linux: Shell Script to Calculate Values Recommended Linux HugePages / HugeTLB Configuration (Doc ID 401749.1)

Start all DB instances on the server

Now run the script from MOS. It will give you a recommended value for vm.nr_hugepages. Next, configure huge pages on Linux.

You need to be root to do this. If this is the first time you are configuring huge pages on the server, you might need to reboot.

Note: the huge pages value should be greater than or equal to all the SGAs combined on the server.
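The core of the MOS script is simple arithmetic: divide the combined SGA size by the huge page size and round up. A minimal sketch (the SGA total below is a hypothetical value, chosen so the result matches the vm.nr_hugepages figure used later in this post):

```shell
#!/bin/sh
# Sketch of the calculation done by MOS 401749.1 (values are hypothetical examples)
HPG_SZ=2048                 # Hugepagesize in KB, from: grep Hugepagesize /proc/meminfo
SGA_TOTAL_KB=113510400      # sum of the SGAs of all instances on the server, in KB
# Round up so the huge pages fully cover the SGAs
NUM_PG=$(( (SGA_TOTAL_KB + HPG_SZ - 1) / HPG_SZ ))
echo "vm.nr_hugepages = $NUM_PG"
```

Running the real MOS script against live instances is still the right way to get the value; this just shows where the number comes from.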

You can check the hugepages config using

$ grep Huge /proc/meminfo

Now run the following as root and enter the recommended value you got from the script.

# vi /etc/sysctl.conf

vm.nr_hugepages=55425

Save it and quit.

Now run

# sysctl -p

Now verify the changes using

$ grep Huge /proc/meminfo

If you don't see any changes, you will need to reboot.

While modifying this, also verify the hard and soft memlock limits in the following file:

/etc/security/limits.conf

oracle    soft     memlock 237674430
oracle    hard     memlock 237674430
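As a sanity check, the memlock limit (in KB) must be at least the memory taken up by the huge pages themselves; Oracle's install guides typically suggest sizing it close to total RAM, which is presumably where the larger figure above comes from. A quick sketch of the minimum (values are the hypothetical examples from this post):

```shell
#!/bin/sh
# Minimum memlock (KB) = number of huge pages * huge page size (hypothetical values)
NR_HUGEPAGES=55425
HPG_SZ=2048      # Hugepagesize in KB
MIN_MEMLOCK_KB=$(( NR_HUGEPAGES * HPG_SZ ))
echo "memlock must be at least $MIN_MEMLOCK_KB KB"
```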

Now set the USE_LARGE_PAGES parameter on the individual DB instances. Setting this parameter improves memory management on the database.

The parameter takes 3 values: TRUE, FALSE and ONLY.

If you set it to TRUE, the DB will use huge pages and will still start even if the huge pages value is less than the SGA of the database.
If you set it to FALSE, the DB will not use huge pages.
If you set it to ONLY, the DB will fail to start if huge pages cannot be used for the entire SGA of the database.

Note: TRUE is the safer choice, because with ONLY the DB will fail to start if enough memory is not configured in huge pages.
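To apply it, something like the following works (it is a static parameter, so it takes effect on the next restart; the SID clause is for RAC):

```sql
-- Takes effect at the next instance restart
ALTER SYSTEM SET use_large_pages = TRUE SCOPE = SPFILE SID = '*';
```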

Tuesday, January 24, 2017

ORA-24005 Inappropriate utilities used to perform DDL on AQ table

You might get this error while dropping a schema, a tablespace, or the objects themselves.

You need to drop the queue table as the SYS user with the following syntax:

execute DBMS_AQADM.DROP_QUEUE_TABLE (queue_table => 'kmaedq_edqstaging.AQ$_EDQ_TOPIC_TAB_G', force => true);


Just mention the table name in queue_table, including the schema name.

Now you can drop the object.
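If you are not sure which AQ tables exist in the schema you are dropping, a query like this lists them (the schema name here is the one from the example above):

```sql
-- List AQ queue tables owned by a schema before dropping it
SELECT owner, queue_table
  FROM dba_queue_tables
 WHERE owner = 'KMAEDQ_EDQSTAGING';
```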

Friday, January 20, 2017

Migrating database using expdp and impdp checklist

Expdp and Impdp database migration checklist

On source

PURGE DBA_RECYCLEBIN;

expdp \'/ as sysdba\' directory=nfs dumpfile=full_bkp_%U.dmp full=y parallel=8

Use Toad to get the DDL for all the roles and users.

Comment out all the unnecessary statements except the grants. We will run this grants script at the end of the import on the target.
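If Toad is not handy, DBMS_METADATA can generate roughly the same DDL; a sketch (the grant types and the 'X1' user are illustrative, and GET_GRANTED_DDL raises an error for a user that has no grants of the requested type):

```sql
SET LONG 100000 LONGCHUNKSIZE 100000 PAGESIZE 0
-- User creation DDL
SELECT dbms_metadata.get_ddl('USER', 'X1') FROM dual;
-- Role, system and object grants for that user
SELECT dbms_metadata.get_granted_ddl('ROLE_GRANT', 'X1') FROM dual;
SELECT dbms_metadata.get_granted_ddl('SYSTEM_GRANT', 'X1') FROM dual;
SELECT dbms_metadata.get_granted_ddl('OBJECT_GRANT', 'X1') FROM dual;
```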

On Target

Create target DB.

Make sure you have the same character set.

Create all the tablespaces as in the source DB.

Create all the roles as in the source.

Create all the profiles as in the source.

Make sure you have the same temp tablespace name as the source.

Once the export has completed, copy the dump files to the corresponding location on the target and start the import. It's better to use NFS storage so that you can skip the copy.

Import only the necessary schemas and exclude all the sys related schemas.

Make sure you have a log file assigned to each schema import.

The easiest way is to create a script like this on your server and run it with nohup so it runs in the background.

vi import.sh

#!/bin/sh

export ORACLE_HOME=/u01/app/oracle/product/11.2.0.4/dbhome_1
export ORACLE_SID=skwh1

impdp \'/ as sysdba\' directory=nfs dumpfile=full_bkp_%U.dmp parallel=8 schemas=x1 table_exists_action=replace logfile=x1_import.log
impdp \'/ as sysdba\' directory=nfs dumpfile=full_bkp_%U.dmp parallel=8 schemas=x2 table_exists_action=replace logfile=x2_import.log
impdp \'/ as sysdba\' directory=nfs dumpfile=full_bkp_%U.dmp parallel=8 schemas=x3 table_exists_action=replace logfile=x3_import.log


chmod +x import.sh
nohup ./import.sh &
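With many schemas, it is less error-prone to generate the script than to type each line; a sketch (the schema names are the examples from the script above, and it writes to a separate file, gen_import.sh, so any hand-written script is untouched):

```shell
#!/bin/sh
# Generate one impdp command per schema into gen_import.sh (schema list is illustrative)
: > gen_import.sh
for s in x1 x2 x3; do
  echo "impdp \\'/ as sysdba\\' directory=nfs dumpfile=full_bkp_%U.dmp parallel=8 schemas=$s table_exists_action=replace logfile=${s}_import.log" >> gen_import.sh
done
chmod +x gen_import.sh
```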

Once everything is imported, verify the logs.

You might see a lot of errors related to grants. Ignore those, as we will run the scripts we captured from the source.

Troubleshoot any remaining errors.

Now run the role grants script and the user grants script.

If any materialized views failed to create as part of the import, re-run those imports now with table_exists_action=skip so you don't need to import whole tables again.

After solving all the issues, go to $ORACLE_HOME/rdbms/admin and run @utlrp.sql

This should compile all invalid objects.
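You can verify with a quick count of invalid objects before and after running utlrp.sql:

```sql
-- Should drop to (near) zero after utlrp.sql
SELECT owner, COUNT(*) AS invalid_objects
  FROM dba_objects
 WHERE status = 'INVALID'
 GROUP BY owner;
```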



Thursday, January 19, 2017

Changing Oracle Database character set

Here is how to change the DB character set.

SELECT * FROM V$NLS_PARAMETERS WHERE PARAMETER LIKE '%CHARACTERSET%';

select * from V$nls_parameters where parameter in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');


or

select * from nls_database_parameters;

SHUTDOWN IMMEDIATE;

STARTUP RESTRICT;

ALTER DATABASE CHARACTER SET INTERNAL_USE AL32UTF8;

ALTER DATABASE CHARACTER SET AL32UTF8;

SHUTDOWN IMMEDIATE;

STARTUP;

SELECT * FROM V$NLS_PARAMETERS WHERE PARAMETER LIKE '%CHARACTERSET%';


For RAC

You need to stop the database and start a single instance:

alter system set cluster_database=false scope=spfile;

shut immediate;

startup restrict;

Then follow the steps above. Once done, set cluster_database=true again and start the DB normally using srvctl.
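Put together, the RAC sequence looks roughly like this (the database name in the srvctl command is db1, the example instance from this blog; substitute your own):

```sql
-- On one instance, as SYSDBA
ALTER SYSTEM SET cluster_database = FALSE SCOPE = SPFILE;
SHUTDOWN IMMEDIATE
STARTUP RESTRICT
-- ... run the character set change steps above ...
ALTER SYSTEM SET cluster_database = TRUE SCOPE = SPFILE;
SHUTDOWN IMMEDIATE
-- Then start all instances normally:
-- $ srvctl start database -d db1
```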


*********************************************************************

Sometimes you might need to change the national character set of the database. Follow the steps below:

SELECT property_value FROM database_properties
WHERE upper(property_name) = 'NLS_NCHAR_CHARACTERSET';

SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER SYSTEM ENABLE RESTRICTED SESSION;
ALTER DATABASE OPEN;
ALTER DATABASE NATIONAL CHARACTER SET UTF8;
SHUTDOWN IMMEDIATE
STARTUP

Note that using the 'ALTER DATABASE' command to change the character set is not the recommended approach, as it bypasses Oracle's data validation and can corrupt data if the character sets are not compatible.

Make sure to take a full backup before doing this.

