Monday, March 21, 2016

Partitioning existing tables in Oracle Database

Partitioning tables


If we have an existing table that needs to be partitioned, here are the steps to follow.

Consider a non-partitioned table testx, which has around 500,000 rows.

We can partition the table based on our requirements.

Say we have a column called DATE whose values look like 20160401000000.

The format is yyyymmddhhmmss.
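For reference, a value in this format can be generated from the current date (a quick sketch; YYYYMMDDHH24MISS is the matching Oracle format mask, with HH24MISS as the 24-hour time portion):

```sql
-- Current date/time as a yyyymmddhhmmss NUMBER
SELECT TO_NUMBER(TO_CHAR(SYSDATE, 'YYYYMMDDHH24MISS')) AS date_key
FROM dual;
```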


I will create a new table, TESTX_PARTITION, using the following syntax. The CREATE statement defines a single partition with a MAXVALUE clause; here I chose TESTX_MAX as the partition name.

CREATE TABLE SVADAPALLI.TESTX_PARTITION
(
  RDR_KEY                         NUMBER,
  VEHICLE_KEY                     NUMBER,
  "DATE"                          NUMBER,  -- DATE is a reserved word, so it must be quoted
  VEHICLE_ID                      NUMBER
)
TABLESPACE USERS
PCTUSED    0
PCTFREE    10
INITRANS   1
MAXTRANS   255
STORAGE    (
            INITIAL          80K
            NEXT             1M
            MINEXTENTS       1
            MAXEXTENTS       UNLIMITED
            PCTINCREASE      0
            BUFFER_POOL      DEFAULT
           )
LOGGING
NOCOMPRESS
NOCACHE
NOPARALLEL
MONITORING
PARTITION BY RANGE ("DATE")
(PARTITION TESTX_MAX VALUES LESS THAN (MAXVALUE));
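As a quick sanity check (a sketch using the standard USER_TABLES dictionary view), you can confirm that the new table really was created partitioned:

```sql
-- PARTITIONED should show YES for the new table
SELECT table_name, partitioned
FROM   user_tables
WHERE  table_name = 'TESTX_PARTITION';
```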

Once the empty table is created, the next step is to move the data from the original table into the partitioned table.

This is called a partition exchange; it swaps the two segments in the data dictionary, so no rows are physically copied:

ALTER TABLE testx_partition
  EXCHANGE PARTITION testx_max
  WITH TABLE testx
  WITHOUT VALIDATION;
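To sanity-check the exchange, you can count the rows on both sides (a sketch reusing the table and partition names from the example above):

```sql
-- After the exchange, the rows should be in the TESTX_MAX
-- partition of the new table, and the original should be empty
SELECT COUNT(*) FROM testx_partition PARTITION (testx_max);
SELECT COUNT(*) FROM testx;
```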
 
 
Once this completes, all rows from the original table are in the new table, and the original testx is empty.
 
 
Now drop the original table and rename the new table:

DROP TABLE testx CASCADE CONSTRAINTS;

RENAME testx_partition TO testx;
 
 
Once this is done, your new table has all your data in the TESTX_MAX partition.
 
Now that you have your data, you can easily split that partition to create new partitions depending on your requirements. In my case I will split it monthly.
 
ALTER TABLE TESTX
SPLIT PARTITION TESTX_MAX AT
('20160201000000')
INTO (PARTITION TESTX_JAN
      TABLESPACE USERS
      PCTFREE    10
      INITRANS   1
      MAXTRANS   255
      STORAGE    (
                  MAXSIZE          UNLIMITED
                  BUFFER_POOL      DEFAULT
                  FLASH_CACHE      DEFAULT
                  CELL_FLASH_CACHE DEFAULT
                 ),
      PARTITION TESTX_MAX)
UPDATE GLOBAL INDEXES;
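After each split, you can confirm the partition layout from the data dictionary (a sketch using the standard USER_TAB_PARTITIONS view):

```sql
-- List the partitions of TESTX and their upper bounds
SELECT partition_name, high_value
FROM   user_tab_partitions
WHERE  table_name = 'TESTX'
ORDER  BY partition_position;
```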

OEM 12c: java.lang.OutOfMemoryError: Java heap space

agent bin directory: /opt/app/oracle/agentHome/core/12.1.0.2.0/bin

agent logs directory: /opt/app/oracle/agentHome/agent_inst/sysman/log

cat gcagent.log

The errors look like the excerpts below:

2016-03-21 10:35:47,412 [5421649:68EFA393:CRSeOns] INFO - uncaughtException handler with
java.lang.OutOfMemoryError: Java heap space

2016-03-21 10:35:50,446 [1:main] ERROR - agent main throw an error
java.lang.OutOfMemoryError: Java heap space

[153032:GC.Executor.70250] ERROR - Critical error:
oracle.sysman.gcagent.task.TaskZombieException: task declared as a zombie

emd properties directory: /opt/app/oracle/agentHome/agent_inst/sysman/config



1) Stop the agent:
./emctl stop agent

2) Back up the emd.properties file [../agent_inst/sysman/config] and edit it as follows:

a) Add these lines:

_zombieSuspensions=true
_canceledThreadWait=210

b) Change the indicated line as follows:

from :
agentJavaDefines=-Xmx128M -XX:MaxPermSize=96M

to:
agentJavaDefines=-Xmx512M -XX:MaxPermSize=96M


3) Start the agent:
./emctl start agent
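The edits in step 2 can be scripted. Below is a self-contained sketch that works on a scratch copy of emd.properties so it can run anywhere; on a real host you would point CONF at the agent_inst/sysman/config directory instead:

```shell
#!/bin/sh
# Demo of the emd.properties edits on a scratch copy
# (on a real agent host, CONF would be agent_inst/sysman/config).
CONF=$(mktemp -d)
printf 'agentJavaDefines=-Xmx128M -XX:MaxPermSize=96M\n' > "$CONF/emd.properties"

# Back up the file before editing
cp "$CONF/emd.properties" "$CONF/emd.properties.bak"

# a) add the zombie-handling properties
cat >> "$CONF/emd.properties" <<'EOF'
_zombieSuspensions=true
_canceledThreadWait=210
EOF

# b) raise the agent JVM heap from 128M to 512M
sed -i 's/-Xmx128M/-Xmx512M/' "$CONF/emd.properties"

grep agentJavaDefines "$CONF/emd.properties"
```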

Friday, March 18, 2016

Lock and unlock table stats

Sometimes table statistics might be locked.

If you try to gather stats while they are in the locked state, you will get an error like this:

ORA-20005: object statistics are locked (stattype = ALL)
ORA-06512: at "SYS.DBMS_STATS", line 24281
ORA-06512: at "SYS.DBMS_STATS", line 24332
ORA-06512: at line 2


To unlock them, here is the command you need to use:

exec dbms_stats.unlock_table_stats('HMA_TM_PROD_GEN2_MONITOR','ZZT_SERVICE_LOG');

Now stats gathering will run fine.

To lock them again:

exec dbms_stats.lock_table_stats('HMA_TM_PROD_GEN2_MONITOR','ZZT_SERVICE_LOG');


To unlock all table stats in a schema, generate the unlock commands with:

select 'exec dbms_stats.unlock_table_stats (''HMA_STAGE'', '''||table_name||''');' from dba_tables where owner='HMA_STAGE';
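To see which tables actually have locked statistics before generating the unlock commands, the data dictionary can be queried directly (a sketch; DBA_TAB_STATISTICS exposes a STATTYPE_LOCKED column):

```sql
-- Tables in the schema whose statistics are currently locked
SELECT owner, table_name, stattype_locked
FROM   dba_tab_statistics
WHERE  owner = 'HMA_STAGE'
AND    stattype_locked IS NOT NULL;
```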

Friday, March 11, 2016

OPatch failed with "executable is active"

While patching a standalone database, I got an error saying OPatch failed because the executable libclntsh.so.11.1 was locked.

Here is what I did to fix it.


Prerequisite check "CheckActiveFilesAndExecutables" failed.
The details are:
Following executables are active :
/opt/app/oracle/product/11.2.0/11.2.0.4/lib/libclntsh.so.11.1


Run the command below. Note the process ID it reports (9795 in my case) and kill it.

/sbin/fuser /opt/app/oracle/product/11.2.0/11.2.0.4/lib/libclntsh.so.11.1

/opt/app/oracle/product/11.2.0/11.2.0.4/lib/libclntsh.so.11.1:     9795m

kill -9 9795

Tuesday, March 1, 2016

Apply Patch 22191577 latest GI PSU to RAC and DB homes using Opatch auto or manual steps

Patch 22191577: GRID INFRASTRUCTURE PATCH SET UPDATE 11.2.0.4.160119 (JAN2016)

Unzip patch 22191577.
Unzip the latest OPatch version into both the Oracle home and the Grid (CRS) home.

Once this completes successfully, make sure the OPatch folder in both the CRS and ORACLE homes is owned by oracle:dba.

OPatch version check: ./opatch version

OPatch inventory check: ./opatch lsinventory

OPatch conflict check: ./opatch prereq CheckConflictAmongPatchesWithDetail -phBaseDir /u02/patch/22191577

After this, create an OCM response file. You can create it in either the Grid OPatch folder or the DB home OPatch folder; it will be used if you are running opatch auto.

In my case I created the file in both

/u01/app/oracle/product/11.2.0/dbhome_2/OPatch/ocm/bin/emocmrsp
/u01/app/11204/OPatch/ocm/bin/emocmrsp

Skip the email ID and select Y. A file named ocm.rsp will be created.



The main patch folder, 22191577, contains three patch folders:

21948348 - DB and Grid home
21948347 - DB and Grid home
21948355 - Grid home only

You can find which patch goes to which home in the patch README.

In my case

ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_2/
GRID_HOME=/u01/app/11204/


************************************************************************

If using opatch auto, you must run it as root. You don't need to shut anything down; OPatch handles that automatically.

opatch auto
Root user
#./opatch auto /u02/patch/22191577 -ocmrf /u01/app/oracle/product/11.2.0/dbhome_2/OPatch/ocm/bin/ocm.rsp

************************************************************************

If you choose to patch manually, you need to shut down the instance and CRS yourself.

Shutdown instances on node1 as oracle user
srvctl stop instance -d prod -i prod1

Shutdown crs on node1 as root user
crsctl stop crs

Root user
cd $GRID_HOME/crs/install
./rootcrs.pl -unlock   (If you don't execute this, you will get permission errors related to the patch storage directory, complaining that files cannot be read or written.)


Oracle user
Grid home
/u01/app/11204/OPatch/opatch napply -oh /u01/app/11204 -local /u02/patch/22191577/21948348

/u01/app/11204/OPatch/opatch napply -oh /u01/app/11204 -local /u02/patch/22191577/21948355

/u01/app/11204/OPatch/opatch apply -oh /u01/app/11204 -local /u02/patch/22191577/21948347
 
Oracle user
DB home
/u02/patch/22191577/21948348/custom/server/21948348/custom/scripts/prepatch.sh -dbhome /u01/app/oracle/product/11.2.0/dbhome_2

/u01/app/oracle/product/11.2.0/dbhome_2/OPatch/opatch napply -oh /u01/app/oracle/product/11.2.0/dbhome_2/. -local /u02/patch/22191577/21948348/custom/server/21948348

/u01/app/oracle/product/11.2.0/dbhome_2/OPatch/opatch apply -oh /u01/app/oracle/product/11.2.0/dbhome_2/ -local /u02/patch/22191577/21948347 

/u02/patch/22191577/21948348/custom/server/21948348/custom/scripts/postpatch.sh -dbhome /u01/app/oracle/product/11.2.0/dbhome_2

After completion
Root user
# /u01/app/oracle/product/11.2.0/dbhome_2/rdbms/install/rootadd_rdbms.sh
# /u01/app/11204/crs/install/rootcrs.pl -patch


Once you are done patching node1, start the instance:
srvctl start instance -d prod -i prod1

***********************************************************************

Once the first instance is started, repeat the same steps on node2.

***********************************************************************

After both instances have started up, refer to the README for the post-patch instructions. For this PSU I have to do the following.

This should be run on one instance only.

cd $ORACLE_HOME/rdbms/admin
sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> STARTUP
SQL> @catbundle.sql psu apply
SQL> QUIT

After the command completes, it will show the log file location. You can grep it for errors.

You can also run @utlrp.sql to recompile invalid objects in the database.

************************************************************************

Rollback instructions

If using opatch auto:

Root user
# opatch auto /u02/patch/22191577/ -rollback -ocmrf /u01/app/oracle/product/11.2.0/dbhome_2/OPatch/ocm/bin/ocm.rsp

If doing it manually, shut down the instance and CRS yourself.

Shutdown instances on node1 as oracle user
srvctl stop instance -d prod -i prod1

Shutdown crs on node1 as root user
crsctl stop crs

Root user
cd $GRID_HOME/crs/install
./rootcrs.pl -unlock   (If you don't execute this, you will get permission errors related to the patch storage directory, complaining that files cannot be read or written.)

Oracle user
Grid home
/u01/app/11204/OPatch/opatch rollback -local -id 21948348 -oh /u01/app/11204

/u01/app/11204/OPatch/opatch rollback -local -id 21948355 -oh /u01/app/11204

/u01/app/11204/OPatch/opatch rollback -local -id 21948347 -oh /u01/app/11204

Oracle user
DB home
/u02/patch/22191577/21948348/custom/server/21948348/custom/scripts/prepatch.sh -dbhome /u01/app/oracle/product/11.2.0/dbhome_2

/u01/app/oracle/product/11.2.0/dbhome_2/OPatch/opatch rollback -local -id 21948348 -oh /u01/app/oracle/product/11.2.0/dbhome_2

/u01/app/oracle/product/11.2.0/dbhome_2/OPatch/opatch rollback -local -id 21948347 -oh /u01/app/oracle/product/11.2.0/dbhome_2

/u02/patch/22191577/21948348/custom/server/21948348/custom/scripts/postpatch.sh -dbhome /u01/app/oracle/product/11.2.0/dbhome_2

After completion
Root user
# /u01/app/oracle/product/11.2.0/dbhome_2/rdbms/install/rootadd_rdbms.sh
# /u01/app/11204/crs/install/rootcrs.pl -patch

Once you are done with node1, start the instance
srvctl start instance -d prod -i prod1

Repeat the same steps on node2.

After both instances are up, run the following on one instance only:

cd $ORACLE_HOME/rdbms/admin
sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> STARTUP
SQL> @catbundle.sql psu apply
SQL> QUIT

Also run @utlrp.sql.

After everything is completed, you can verify that the patches were applied properly to both homes. Go to the OPatch folder:

cd /u01/app/oracle/product/11.2.0/dbhome_2/OPatch
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_2/
./opatch lsinventory

The above lists the patches applied to the Oracle home. Now change ORACLE_HOME to the grid home:

export ORACLE_HOME=/u01/app/11204
./opatch lsinventory

Now it will list the patches applied to the grid home.

To check further in the database, run the following SQL statements:

select comments, version, bundle_series from sys.registry$history where bundle_series = 'PSU' order by action_time;

select comments Last_patch from dba_registry_history order by action_time desc;
