Monday, April 27, 2015

TFA diag collection for a certain period of time

You need to run these commands as root or via sudo.

sudo /etc/init.d/init.tfa start


sudo /opt/app/crs/11.2.0/11.2.0.4/tfa/bin/tfactl diagcollect -from "Apr/26/2015 01:00:00" -to "Apr/26/2015 09:00:00"


sudo /etc/init.d/init.tfa stop

Friday, April 24, 2015

Add a log file in ASM

ALTER DATABASE ADD LOGFILE INSTANCE 'DHODS1' GROUP 5 SIZE 4096M;

This simple command adds a redo log group with 2 members of 4 GB each. One member is placed in a disk group and mirrored in another disk group if you have normal redundancy, so there is no need to specify a path. Please verify once done.

Thursday, April 23, 2015

Register instance to a listener

alter system set local_listener = '(address_list=(address=(protocol=tcp)(host=xxx.hke.local)(port=1521)))' scope=both sid='shods1';

Monday, April 20, 2015

Friday, April 17, 2015

RAC Issues 11gr2

Notes for RAC Installation


Symptom:
After one of the redundant InfiniBand switches is powered off, one or more nodes lose connection to the cluster and are evicted.

Cause:  The InfiniBand configuration differs between the RAC nodes

Error:
2014-01-03 10:58:12.376
[cssd(10047)]CRS-1612:Network communication with node kmaiptmsdb03 (3) missing for 50% of timeout interval.  Removal of this node from cluster in 299.178 seconds
2014-01-03 11:00:42.434
[cssd(10047)]CRS-1611:Network communication with node kmaiptmsdb03 (3) missing for 75% of timeout interval.  Removal of this node from cluster in 149.131 seconds
2014-01-03 11:01:22.151
[/opt/app/crs/11.2.0/11.2.0.3/bin/orarootagent.bin(10023)]CRS-5818:Aborted command 'check' for resource 'ora.crsd'. Details at (:CRSAGF00113:) {0:0:2} in /opt/app/crs/11.2.0/11.2.0.3/log/kmaiptmsdb01/agent/ohasd/orarootagent_root/orarootagent_root.log.
2014-01-03 11:02:12.468
[cssd(10047)]CRS-1610:Network communication with node kmaiptmsdb03 (3) missing for 90% of timeout interval.  Removal of this node from cluster in 59.104 seconds
2014-01-03 11:03:11.593
[cssd(10047)]CRS-1607:Node kmaiptmsdb03 is being evicted in cluster incarnation 283963173; details at (:CSSNM00007:) in /opt/app/crs/11.2.0/11.2.0.3/log/kmaiptmsdb01/cssd/ocssd.log.
2014-01-03 11:03:14.601
[cssd(10047)]CRS-1625:Node kmaiptmsdb03, number 3, was manually shut down
2014-01-03 11:03:20.546
[cssd(10047)]CRS-1601:CSSD Reconfiguration complete. Active nodes are kmaiptmsdb01 kmaiptmsdb02

path_to_inst:"/pci@400/pci@2/pci@0/pci@8/pciex15b3,673c@0/ibport@1,ffff,ipib" 1 "ibd"
path_to_inst:"/pci@400/pci@2/pci@0/pci@8/pciex15b3,673c@0/ibport@2,ffff,ipib" 2 "ibd"
path_to_inst:"/pci@500/pci@2/pci@0/pci@a/pciex15b3,673c@0/ibport@1,ffff,ipib" 0 "ibd"
path_to_inst:"/pci@500/pci@2/pci@0/pci@a/pciex15b3,673c@0/ibport@2,ffff,ipib" 3 "ibd"





Problem #2:  11.2 RAC database fails to come up after restart

Symptom:  RAC database throws a syntax error during the startup phase

Cause:  The remote_listener parameter is not interpreted properly by the 11.2 RAC database

Error:
ORA-00132: syntax error or unresolved network name 'hmaiptmsdb-scan:1521'

Solution:  Ensure EZCONNECT is included in the NAMES.DIRECTORY_PATH parameter in sqlnet.ora.  This is new behavior in 11.2 RAC.

NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)


*********************************************************************************************************************************


When a communication failure occurs 


If you are still facing the issue, follow the steps below and upload the details from the failing node. 

A>. On the failing node force stop crs using "crsctl stop crs -f" 

B>. Delete the socket files available at the below location on the failing node 
<< you may need to be root for this >> 
/var/tmp/.oracle 
or 
/tmp/.oracle 
or 
/usr/tmp/.oracle 
<< rm -rf /var/tmp/.oracle >> 

C>. On the failing node issue 
ps -ef | grep 'init d.bin' | grep -v grep 
and kill any d.bin process found on the failing node 

D>. Then kill the gipcd on the good node and let it respawn automatically. 
That should allow the gipcd on both nodes to talk to each other over the private interconnect. 
You can kill the gipcd process by getting its process id 
(issue "ps -ef | grep gipcd.bin" to find the gipcd.bin pid). 

E>. start crs on the failing node using "date; crsctl start crs" 

For verification that this process has been used before, you can check: 
#1>11gR2 GI Node May not Join the Cluster After Private Network is Functional After Eviction due to Private Network Problem ( Doc ID 1479380.1 ) 
#2>Due to bug 13899736 - Node cannot join the cluster after reboot or "interconnect restored" 



*********************************************************************************************************************************

Oracle Support may ask you to take a system state dump when a hang occurs.
Here are the steps to gather one.


sqlplus / as sysdba 
SQL> oradebug setospid <ospid of diag process> 
SQL> oradebug unlimit 
SQL> oradebug -g all hanganalyze 3 
##..Wait about 1-2 minutes 
SQL> oradebug -g all hanganalyze 3 
SQL> oradebug -g all dump systemstate 258 

If you can NOT connect to the instance with / as sysdba, you can use a prelim connection; however, hanganalyze is not possible with prelim: 
sqlplus -prelim '/ as sysdba' 
SQL> oradebug setospid <ospid of diag process> 
SQL> oradebug unlimit 
SQL> oradebug -g all dump systemstate 258 

Additional details for collecting system state dumps can be found in MOS note: 452358.1. 

******************************************************
If you have an SGA larger than 100 GB and are running RAC, follow the recommendations below.

a.      Set _lm_sync_timeout to 1200 
           Setting this will prevent some timeouts during reconfiguration and DRM

b.      Set _ksmg_granule_size to 134217728
           Setting this will cut down the time needed to locate the resource for a data block.

c.      Set shared_pool_size to 15% or larger of the total SGA size.
        For example, if SGA size is 1 TB, the shared pool size should be at least 150 GB.

d.      Set _gc_policy_minimum to 15000
        There is no need to set _gc_policy_minimum if DRM is disabled by setting _gc_policy_time = 0

e.      Set _lm_tickets to 5000
        Default is 1000.   Allocating more tickets (used for sending messages) avoids running out of tickets during reconfiguration. 

f.      Set gcs_server_processes to twice the default number of LMS processes that are allocated.
        The default number of lms processes depends on the number of CPUs/cores that the server has, 
        so please refer to the gcs_server_processes init.ora parameter section in the Oracle Database Reference Guide 
        for the default number of lms processes for your server.  Please make sure that the total number of lms processes 
        of all databases on the server is less than the total number of CPUs/cores on the server.  Please refer to the Document 558185.1
******************************************************
Also consider changing these

2). Set "_gc_read_mostly_locking"=FALSE 
This disables read-mostly locking, a feature that helps in environments where objects are mostly read without being modified. Once an object is identified as read-mostly, 
share locks are granted immediately, which keeps nodes from constantly opening and closing share locks and sending lots of messages. 
example: 
/opt/oradiag/diag/rdbms/phsub/phsub1/trace/phsub1_lmd0_8457.trc 
*** 2016-02-03 01:36:21.136 
Begin DRM(12080) (swin 1) - READMOSTLY transfer pkey 96063.0 to 1 oscan 1.1 
kjiobjscn 1 
3). Set _gc_policy_minimum=15000 
This reduces the chances of 'affinity' DRM happening; the default in 11.2.0.3 is 1500 (note this was 6000 in 10.2.0.5). By setting it to 15000 we are saying an object must be accessed 250 times a second by a node to initiate affinity; 
the default corresponds to 25 (100 in 10.2.0.5) 
example: 
READMOSTLY object id 96063.0, objscan 1.1, create affinity to instance 1 
Total pkey count in this drm 1 
* drm quiesce 
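The thresholds quoted above line up as a per-minute count: each _gc_policy_minimum value is exactly 60 times the per-second access rate mentioned. A quick arithmetic check (the per-minute interpretation is my inference from these numbers, not something stated in the note):

```python
# _gc_policy_minimum values quoted above, paired with the per-second access
# rates mentioned in the text; dividing by 60 reproduces each rate, which
# suggests the threshold is counted over a minute (my inference).
def accesses_per_second(gc_policy_minimum):
    return gc_policy_minimum / 60

for setting, rate in [(15000, 250), (1500, 25), (6000, 100)]:
    assert accesses_per_second(setting) == rate
print("all three settings divide by 60 into the quoted rates")
```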
4). Another workaround is increasing the number of Lock Elements by setting _gc_element_percent=140 (default is 110 in 11.2, 120 in 12.1, and this may increase to 140 in 12.2). 
This will increase the number of LEs to 140% of number of data block buffers in the buffer cache. This will result in a slight increase in the shared pool usage. 
5). Verify the number of LMS processes - you should have 4: 
show parameter gcs_server_processes 

Thursday, April 16, 2015

DATA Vault expdp error

Even when using SYSTEM, I was getting an insufficient-privileges error because of Database Vault.

So when you have DB Vault installed, run the following statement as the Database Vault owner and you are good to go:

EXEC DVSYS.DBMS_MACADM.AUTHORIZE_DATAPUMP_USER('XXX');


Here, replace XXX with the username you are trying to export with.

Wednesday, April 15, 2015

Refresh roles in database scheduler job

Take a schema named hma_prod: it has several tables, and new tables are added frequently. Users who need read access to the schema are assigned HMA_PROD_RO_ROLE.

It is a pain to grant each and every table every time a new one is added, so I set up this daily scheduler job that runs every night and refreshes the role's grants.

For this I created a schema with the DBA role.

Create the following procedure in that schema:

CREATE OR REPLACE PROCEDURE HISNA_DBA.REFRESH_SCHEMA_PRIVS (p_schema IN VARCHAR2, p_role IN VARCHAR2, p_privilege IN VARCHAR2)
AUTHID CURRENT_USER IS
  v_exec_string VARCHAR2(300);

  CURSOR c_tables IS
    SELECT table_name
      FROM sys.dba_tables
     WHERE owner = p_schema;

  CURSOR c_views IS
    SELECT object_name
      FROM sys.dba_objects
     WHERE owner = p_schema
       AND status = 'VALID'
       AND object_type = 'VIEW';

  grant_option EXCEPTION;
  PRAGMA EXCEPTION_INIT(grant_option, -1720);
BEGIN
  IF (p_schema IS NULL OR p_role IS NULL OR p_privilege IS NULL) THEN
    dbms_output.put_line('One of the mandatory parameters is blank.');
    dbms_output.put_line('Usage: refresh_schema_privs(<schema_name>, <role>, [ RO | RW ])');
  ELSIF UPPER(p_privilege) NOT IN ('RO', 'RW') THEN
    dbms_output.put_line('Invalid input for parameter privilege.');
  ELSE
    FOR r_tables IN c_tables
    LOOP
      BEGIN
        IF UPPER(p_privilege) = 'RO' THEN
          v_exec_string := 'GRANT SELECT ON ' || p_schema || '.' || r_tables.table_name || ' TO ' || p_role;
        ELSE
          v_exec_string := 'GRANT SELECT, INSERT, UPDATE, DELETE, ALTER ON ' || p_schema || '.' || r_tables.table_name || ' TO ' || p_role;
        END IF;

        EXECUTE IMMEDIATE v_exec_string;
      EXCEPTION
        WHEN grant_option THEN
          NULL;  -- object cannot be granted; skip it
      END;
    END LOOP;

    FOR r_views IN c_views
    LOOP
      BEGIN
        EXECUTE IMMEDIATE 'GRANT SELECT ON ' || p_schema || '.' || r_views.object_name || ' TO ' || p_role;
      EXCEPTION
        WHEN grant_option THEN
          NULL;  -- view cannot be granted; skip it
      END;
    END LOOP;
  END IF;
END;
/
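The grant strings the procedure builds follow a simple RO/RW pattern; a minimal sketch of just that string-building logic in Python (the function name and the example schema/table are mine):

```python
def build_grant(schema, table, role, privilege):
    """Mirror the PL/SQL grant-string logic: RO grants SELECT,
    RW grants SELECT plus DML and ALTER."""
    priv = privilege.upper()
    if priv == "RO":
        verbs = "SELECT"
    elif priv == "RW":
        verbs = "SELECT, INSERT, UPDATE, DELETE, ALTER"
    else:
        raise ValueError("privilege must be RO or RW")
    return f"GRANT {verbs} ON {schema}.{table} TO {role}"

print(build_grant("HMA_PROD", "ORDERS", "HMA_PROD_RO_ROLE", "ro"))
# GRANT SELECT ON HMA_PROD.ORDERS TO HMA_PROD_RO_ROLE
```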


After the procedure is created, create a scheduler job:


BEGIN
  SYS.DBMS_SCHEDULER.CREATE_JOB
    (
       job_name        => 'HISNA_DBA.DAILY_SCHEMA_REFRESH_PRIVS'
      ,start_date      => TO_TIMESTAMP_TZ('2014/03/18 09:32:40.131457 America/Los_Angeles','yyyy/mm/dd hh24:mi:ss.ff tzr')
      ,repeat_interval => 'FREQ=DAILY;BYHOUR=1;BYMINUTE=0;BYSECOND=0'
      ,end_date        => NULL
      ,job_class       => 'DEFAULT_JOB_CLASS'
      ,job_type        => 'PLSQL_BLOCK'
      ,job_action      => 'begin
   HISNA_DBA.REFRESH_SCHEMA_PRIVS (''HMA_PROD'', ''HMA_PROD_RO_ROLE'', ''RO'');
end;'
      ,comments        => 'Job to refresh the schema level privileges daily'
    );
  SYS.DBMS_SCHEDULER.SET_ATTRIBUTE
    ( name      => 'HISNA_DBA.DAILY_SCHEMA_REFRESH_PRIVS'
     ,attribute => 'RESTARTABLE'
     ,value     => FALSE);
  SYS.DBMS_SCHEDULER.SET_ATTRIBUTE
    ( name      => 'HISNA_DBA.DAILY_SCHEMA_REFRESH_PRIVS'
     ,attribute => 'LOGGING_LEVEL'
     ,value     => SYS.DBMS_SCHEDULER.LOGGING_OFF);
  SYS.DBMS_SCHEDULER.SET_ATTRIBUTE_NULL
    ( name      => 'HISNA_DBA.DAILY_SCHEMA_REFRESH_PRIVS'
     ,attribute => 'MAX_FAILURES');
  SYS.DBMS_SCHEDULER.SET_ATTRIBUTE_NULL
    ( name      => 'HISNA_DBA.DAILY_SCHEMA_REFRESH_PRIVS'
     ,attribute => 'MAX_RUNS');
  SYS.DBMS_SCHEDULER.SET_ATTRIBUTE
    ( name      => 'HISNA_DBA.DAILY_SCHEMA_REFRESH_PRIVS'
     ,attribute => 'STOP_ON_WINDOW_CLOSE'
     ,value     => FALSE);
  SYS.DBMS_SCHEDULER.SET_ATTRIBUTE
    ( name      => 'HISNA_DBA.DAILY_SCHEMA_REFRESH_PRIVS'
     ,attribute => 'JOB_PRIORITY'
     ,value     => 3);
  SYS.DBMS_SCHEDULER.SET_ATTRIBUTE_NULL
    ( name      => 'HISNA_DBA.DAILY_SCHEMA_REFRESH_PRIVS'
     ,attribute => 'SCHEDULE_LIMIT');
  SYS.DBMS_SCHEDULER.SET_ATTRIBUTE
    ( name      => 'HISNA_DBA.DAILY_SCHEMA_REFRESH_PRIVS'
     ,attribute => 'AUTO_DROP'
     ,value     => FALSE);

  SYS.DBMS_SCHEDULER.ENABLE
    (name                  => 'HISNA_DBA.DAILY_SCHEMA_REFRESH_PRIVS');
END;
/


Modify the schema and role names according to your requirements. You can add as many calls to the procedure in the job_action block as you want, separating them with semicolons.

Tuesday, April 14, 2015

Drop temp datafile ORA-25152

I got this error while dropping a temp data file.

Run this query to find the sessions using the temp space:
SELECT a.INST_ID,b.TABLESPACE
, b.segfile#
, b.segblk#
, ROUND ( ( ( b.blocks * p.VALUE ) / 1024 / 1024 ), 2 ) size_mb
, a.SID
, a.serial#
, a.username
, a.osuser
, a.program
, a.status
FROM gv$session a
, gv$sort_usage b
, gv$process c
, gv$parameter p
WHERE p.NAME = 'db_block_size'
AND a.saddr = b.session_addr
AND a.paddr = c.addr
ORDER BY b.TABLESPACE
, b.segfile#
, b.segblk#
, b.blocks;
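The size_mb column in this query is just sort-segment blocks times db_block_size converted to megabytes; a quick sketch of that arithmetic (the function name is mine):

```python
def blocks_to_mb(blocks, db_block_size):
    """ROUND((blocks * db_block_size) / 1024 / 1024, 2), as in the query above."""
    return round(blocks * db_block_size / 1024 / 1024, 2)

# 12800 blocks of 8 KB = 100 MB of temp space in use by the session.
print(blocks_to_mb(12800, 8192))  # 100.0
```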


Kill the listed sessions, then drop the temp file.

Monday, April 13, 2015

drop partition using shell and sql (2 shell scripts)

We have a partitioned table with partitions like

zzt_service_log_2015MAY
zzt_service_log_2015APR
zzt_service_log_2015JUN

This table's growth is huge and we don't need the older data, so we need to drop partitions every month to reclaim space.

My plan is to write a script that runs on the 1st of every month and drops the old partition.

For example, if today is May 1st we retain the April partition and delete the March partition.

Since only the last 3 letters of the partition names change, I used a query against dual to get the month.

I used SYSDATE-45 and run the script on the first of every month, so it returns the month before the previous month. This could be done in the shell as well, but I used SQL*Plus.

Here is the query:

SELECT TO_CHAR(SYSDATE-45,'MON') FROM DUAL;

The above returns the 3-letter month name in uppercase, which matches the suffix we use in the partition names.
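The same SYSDATE-45 suffix computation can be checked outside the database; a minimal Python sketch (the function name is mine):

```python
from datetime import date, timedelta

def partition_month_suffix(run_date):
    """Return the 3-letter uppercase month abbreviation for run_date - 45 days,
    matching TO_CHAR(SYSDATE-45, 'MON')."""
    return (run_date - timedelta(days=45)).strftime("%b").upper()

# Run on May 1st: 45 days earlier falls in March, so the March partition is dropped.
print(partition_month_suffix(date(2015, 5, 1)))  # MAR
```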

So first I tried it in staging; here is the script. It worked like a charm with no issues.

#!/bin/ksh
export ORACLE_SID=shsub
export ORACLE_HOME=/opt/app/oracle/product/11.2.0/11.2.0.4
count=$(sqlplus -s '/@shsub_sys as sysdba'<<EOF
  set pages 0 feed off
  SELECT TO_CHAR(SYSDATE-45,'MON') FROM DUAL;
EOF
)
sqlplus /@shsub_sys as sysdba <<EOF
alter table hma_tm_prod_gen2_monitor.zzt_service_log drop partition zzt_service_log_2015$count update global indexes;
exit
EOF


Then I tried it in production and got a lot of errors (server settings, maybe?). So with the help of my friend we wrote another script for production; this is how it looks:

#!/bin/ksh
export ORACLE_SID=phsub1
export ORACLE_HOME=/opt/app/oracle/product/11.2.0/11.2.0.4
function run_one
{
sqlplus -s "/ as sysdba" << EOF
set heading off;
SET pages 0;
SET echo off;
SET feed off;
SELECT TO_CHAR(SYSDATE-45,'MON') FROM DUAL;
quit
EOF
}
count=`run_one`

sqlplus -s "/ as sysdba" <<EOF
alter table hma_tm_prod_gen2_monitor.zzt_service_log drop partition zzt_service_log_2015$count update global indexes;
exit
EOF

Then I set up a cron job to run on the 1st of every month. All set :)

Tuesday, April 7, 2015

Cleaning Sysaux tablespace


1. Run @?/rdbms/admin/awrinfo.sql -> detailed info such as the schema breakdown, SYSAUX occupants' space usage, etc.

This shows what is using the most space.

Narrow down from here. In case it is AWR with the WRH$ tables, check the max and min snapshot ids of the particular tables:

SQL> select min(snap_id),MAX(snap_id) from dba_hist_snapshot;

MIN(SNAP_ID) MAX(SNAP_ID)
------------ ------------
49032 49210

SQL> select min(snap_id),MAX(snap_id) from WRH$_ACTIVE_SESSION_HISTORY;

MIN(SNAP_ID) MAX(SNAP_ID)
------------ ------------
45912 49210

SQL> select min(snap_id),MAX(snap_id) from WRH$_EVENT_HISTOGRAM;

MIN(SNAP_ID) MAX(SNAP_ID)
------------ ------------
20121 49210

SQL> select min(snap_id),MAX(snap_id) from WRH$_SQL_BIND_METADATA;

MIN(SNAP_ID) MAX(SNAP_ID)
------------ ------------
19976 49260

In the above case something is wrong with WRH$_EVENT_HISTOGRAM and WRH$_SQL_BIND_METADATA.

They hold very old data. This is because the Oracle purge job runs in the maintenance window, and if the purge query takes too long it is skipped; that is why these tables have piled up.

So check the partitions of these tables to get a clear idea.

If you see 2 partitions for each table, then run:

SQL> alter session set "_swrf_test_action" = 72;  -- forces the partition split operation



This creates a new partition.

We have to delete old snapshots to reclaim space; in the above case, delete from 19976 to 45912. The partition name has the dbid in it. Note down these 3 values.

EXEC DBMS_WORKLOAD_REPOSITORY.DROP_SNAPSHOT_RANGE(19976, 45912, 2779876350);

The first value is the begin snap_id, the second the end snap_id, and the third the dbid.

Run this. It is resource intensive, so make sure you have sufficient undo tablespace.


The above did not work for me since we did not have sufficient undo tablespace, so I split partitions on the tables that had old snapshots and dropped the old ones manually.




Splitting Partition and dropping partition

ALTER TABLE WRH$_PARAMETER
 SPLIT PARTITION   WRH$_PARAME_3251756137_17280 AT
 (3251756137, 27679)
 INTO (PARTITION   WRH$_PARAME_3251756137_ZZZ
       TABLESPACE SYSAUX
       PCTFREE    1
       INITRANS   1
       MAXTRANS   255
       STORAGE    (
                   INITIAL          64K
                   NEXT             1M
                   MAXSIZE          UNLIMITED
                   MINEXTENTS       1
                   BUFFER_POOL      DEFAULT
                   FLASH_CACHE      DEFAULT
                   CELL_FLASH_CACHE DEFAULT
                  ),
       PARTITION   WRH$_PARAME_3251756137_17280)
  UPDATE GLOBAL INDEXES;

ALTER TABLE WRH$_SEG_STAT
 SPLIT PARTITION   WRH$_SEG_ST_3251756137_17280 AT
 (3251756137, 27679)
 INTO (PARTITION   WRH$_SEG_ST_3251756137_ZZZ
       TABLESPACE SYSAUX
       PCTFREE    1
       INITRANS   1
       MAXTRANS   255
       STORAGE    (
                   INITIAL          64K
                   NEXT             1M
                   MAXSIZE          UNLIMITED
                   MINEXTENTS       1
                   BUFFER_POOL      DEFAULT
                   FLASH_CACHE      DEFAULT
                   CELL_FLASH_CACHE DEFAULT
                  ),
       PARTITION   WRH$_SEG_ST_3251756137_17280)
  UPDATE GLOBAL INDEXES;

ALTER TABLE WRH$_DLM_MISC
 SPLIT PARTITION   WRH$_DLM_MI_3251756137_17280 AT
 (3251756137, 27679)
 INTO (PARTITION   WRH$_DLM_MI_3251756137_ZZZ
       TABLESPACE SYSAUX
       PCTFREE    1
       INITRANS   1
       MAXTRANS   255
       STORAGE    (
                   INITIAL          64K
                   NEXT             1M
                   MAXSIZE          UNLIMITED
                   MINEXTENTS       1
                   BUFFER_POOL      DEFAULT
                   FLASH_CACHE      DEFAULT
                   CELL_FLASH_CACHE DEFAULT
                  ),
       PARTITION   WRH$_DLM_MI_3251756137_17280)
  UPDATE GLOBAL INDEXES;

ALTER TABLE WRH$_SERVICE_STAT
 SPLIT PARTITION    WRH$_SERVIC_3251756137_17280 AT
 (3251756137, 27679)
 INTO (PARTITION    WRH$_SERVIC_3251756137_ZZZ
       TABLESPACE SYSAUX
       PCTFREE    1
       INITRANS   1
       MAXTRANS   255
       STORAGE    (
                   INITIAL          64K
                   NEXT             1M
                   MAXSIZE          UNLIMITED
                   MINEXTENTS       1
                   BUFFER_POOL      DEFAULT
                   FLASH_CACHE      DEFAULT
                   CELL_FLASH_CACHE DEFAULT
                  ),
       PARTITION    WRH$_SERVIC_3251756137_17280)
  UPDATE GLOBAL INDEXES;

ALTER TABLE WRH$_TABLESPACE_STAT
 SPLIT PARTITION   WRH$_TABLES_3251756137_17280 AT
 (3251756137, 27679)
 INTO (PARTITION   WRH$_TABLES_3251756137_ZZZ
       TABLESPACE SYSAUX
       PCTFREE    1
       INITRANS   1
       MAXTRANS   255
       STORAGE    (
                   INITIAL          64K
                   NEXT             1M
                   MAXSIZE          UNLIMITED
                   MINEXTENTS       1
                   BUFFER_POOL      DEFAULT
                   FLASH_CACHE      DEFAULT
                   CELL_FLASH_CACHE DEFAULT
                  ),
       PARTITION   WRH$_TABLES_3251756137_17280)
  UPDATE GLOBAL INDEXES;

ALTER TABLE WRH$_OSSTAT
 SPLIT PARTITION   WRH$_OSSTAT_3251756137_17280 AT
 (3251756137, 27679)
 INTO (PARTITION   WRH$_OSSTAT_3251756137_ZZZ
       TABLESPACE SYSAUX
       PCTFREE    1
       INITRANS   1
       MAXTRANS   255
       STORAGE    (
                   INITIAL          64K
                   NEXT             1M
                   MAXSIZE          UNLIMITED
                   MINEXTENTS       1
                   BUFFER_POOL      DEFAULT
                   FLASH_CACHE      DEFAULT
                   CELL_FLASH_CACHE DEFAULT
                  ),
       PARTITION   WRH$_OSSTAT_3251756137_17280)
  UPDATE GLOBAL INDEXES;

ALTER TABLE  WRH$_SYS_TIME_MODEL
 SPLIT PARTITION   WRH$_SYS_TI_3251756137_17280 AT
 (3251756137, 27679)
 INTO (PARTITION   WRH$_SYS_TI_3251756137_ZZZ
       TABLESPACE SYSAUX
       PCTFREE    1
       INITRANS   1
       MAXTRANS   255
       STORAGE    (
                   INITIAL          64K
                   NEXT             1M
                   MAXSIZE          UNLIMITED
                   MINEXTENTS       1
                   BUFFER_POOL      DEFAULT
                   FLASH_CACHE      DEFAULT
                   CELL_FLASH_CACHE DEFAULT
                  ),
       PARTITION   WRH$_SYS_TI_3251756137_17280)
  UPDATE GLOBAL INDEXES;

ALTER TABLE  WRH$_SERVICE_WAIT_CLASS
 SPLIT PARTITION   WRH$_SERVIC_3251756137_17280 AT
 (3251756137, 27679)
 INTO (PARTITION   WRH$_SERVIC_3251756137_ZZZ
       TABLESPACE SYSAUX
       PCTFREE    1
       INITRANS   1
       MAXTRANS   255
       STORAGE    (
                   INITIAL          64K
                   NEXT             1M
                   MAXSIZE          UNLIMITED
                   MINEXTENTS       1
                   BUFFER_POOL      DEFAULT
                   FLASH_CACHE      DEFAULT
                   CELL_FLASH_CACHE DEFAULT
                  ),
       PARTITION   WRH$_SERVIC_3251756137_17280)
  UPDATE GLOBAL INDEXES;

ALTER TABLE WRH$_INST_CACHE_TRANSFER
 SPLIT PARTITION   WRH$_INST_C_3251756137_17280 AT
 (3251756137, 27679)
 INTO (PARTITION   WRH$_INST_C_3251756137_ZZZ
       TABLESPACE SYSAUX
       PCTFREE    1
       INITRANS   1
       MAXTRANS   255
       STORAGE    (
                   INITIAL          64K
                   NEXT             1M
                   MAXSIZE          UNLIMITED
                   MINEXTENTS       1
                   BUFFER_POOL      DEFAULT
                   FLASH_CACHE      DEFAULT
                   CELL_FLASH_CACHE DEFAULT
                  ),
       PARTITION   WRH$_INST_C_3251756137_17280)
  UPDATE GLOBAL INDEXES;

ALTER TABLE WRH$_INTERCONNECT_PINGS
 SPLIT PARTITION   WRH$_INTERC_3251756137_17280 AT
 (3251756137, 27679)
 INTO (PARTITION   WRH$_INTERC_3251756137_ZZZ
       TABLESPACE SYSAUX
       PCTFREE    1
       INITRANS   1
       MAXTRANS   255
       STORAGE    (
                   INITIAL          64K
                   NEXT             1M
                   MAXSIZE          UNLIMITED
                   MINEXTENTS       1
                   BUFFER_POOL      DEFAULT
                   FLASH_CACHE      DEFAULT
                   CELL_FLASH_CACHE DEFAULT
                  ),
       PARTITION   WRH$_INTERC_3251756137_17280)
  UPDATE GLOBAL INDEXES;

ALTER TABLE WRH$_MVPARAMETER
 SPLIT PARTITION   WRH$_MVPARA_3251756137_17280 AT
 (3251756137, 27679)
 INTO (PARTITION   WRH$_MVPARA_3251756137_ZZZ
       TABLESPACE SYSAUX
       PCTFREE    1
       INITRANS   1
       MAXTRANS   255
       STORAGE    (
                   INITIAL          64K
                   NEXT             1M
                   MAXSIZE          UNLIMITED
                   MINEXTENTS       1
                   BUFFER_POOL      DEFAULT
                   FLASH_CACHE      DEFAULT
                   CELL_FLASH_CACHE DEFAULT
                  ),
       PARTITION   WRH$_MVPARA_3251756137_17280)
  UPDATE GLOBAL INDEXES;




Find and rebuild any unusable index partitions:

select * from dba_ind_partitions where status='UNUSABLE';
alter index WRH$_EVENT_HISTOGRAM_PK rebuild partition WRH$_EVENT__3251756137_17280;
alter index WRH$_SYSSTAT_PK rebuild partition WRH$_SYSSTA_3251756137_0;
alter index WRH$_SYSSTAT_PK rebuild partition WRH$_SYSSTA_3251756137_17280;
alter index WRH$_SYSSTAT_PK rebuild partition WRH$_SYSSTAT_MXDB_MXSN;



Then shrink the tables to release the space:

alter table WRH$_DB_CACHE_ADVICE shrink space;
alter table WRH$_DLM_MISC shrink space;
alter table WRH$_EVENT_HISTOGRAM shrink space;
alter table WRH$_FILESTATXS shrink space;
alter table WRH$_INST_CACHE_TRANSFER shrink space;
alter table WRH$_INTERCONNECT_PINGS shrink space;
alter table WRH$_LATCH shrink space;
alter table WRH$_LATCH_CHILDREN shrink space;
alter table WRH$_LATCH_MISSES_SUMMARY shrink space;
alter table WRH$_LATCH_PARENT shrink space;
alter table WRH$_MVPARAMETER shrink space;
alter table WRH$_OSSTAT shrink space;
alter table WRH$_PARAMETER shrink space;
alter table WRH$_ROWCACHE_SUMMARY shrink space;
alter table WRH$_SEG_STAT shrink space;
alter table WRH$_SERVICE_STAT shrink space;
alter table WRH$_SERVICE_WAIT_CLASS shrink space;
alter table WRH$_SGASTAT shrink space;
alter table WRH$_SQLSTAT shrink space;
alter table WRH$_SYSSTAT shrink space;
alter table WRH$_SYSTEM_EVENT shrink space;
alter table WRH$_SYS_TIME_MODEL shrink space;
alter table WRH$_TABLESPACE_STAT shrink space;
alter table WRH$_WAITSTAT shrink space;




Generate the drop statements for the remaining old partitions:

select 'Alter table  '||TABLE_NAME||' DROP PARTITION '||PARTITION_NAME||' update global indexes;' from dba_tab_partitions where table_name in ('WRH$_DLM_MISC'
,'WRH$_INST_CACHE_TRANSFER'
,'WRH$_INTERCONNECT_PINGS'
,'WRH$_MVPARAMETER'
,'WRH$_OSSTAT'
,'WRH$_PARAMETER'
,'WRH$_SEG_STAT'
,'WRH$_SERVICE_STAT'
,'WRH$_SERVICE_WAIT_CLASS'
,'WRH$_SYS_TIME_MODEL'
,'WRH$_TABLESPACE_STAT'
) and partition_name like '%17280';
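The query above is a classic generate-SQL-from-the-dictionary pattern; the shape of the generated statements can be sketched in Python (the helper and the sample partition name are illustrative; real names come from dba_tab_partitions):

```python
def drop_partition_sql(table, partition):
    """Build the same DROP PARTITION statement text the dictionary query generates."""
    return f"Alter table  {table} DROP PARTITION {partition} update global indexes;"

# Hypothetical partition name following the WRH$_<abbrev>_<dbid>_<snap> convention:
print(drop_partition_sql("WRH$_OSSTAT", "WRH$_OSSTAT_3251756137_17280"))
# Alter table  WRH$_OSSTAT DROP PARTITION WRH$_OSSTAT_3251756137_17280 update global indexes;
```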

