Friday, August 28, 2015

rman point in time recovery

run
{
allocate channel ch1 type disk;
set until time "to_date('2015-08-28:09:57:14', 'yyyy-mm-dd:hh24:mi:ss')";
restore database;
recover database;
}
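After an incomplete (point-in-time) recovery the database has to be opened with RESETLOGS; once the recover step completes, run:

alter database open resetlogs;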

Wednesday, August 26, 2015

sql trace by session, sid and system

We can enable SQL trace in various ways. Here are a few.

Credits to : https://dbaclass.com/article/tracing-sessions-in-oracle/

1. Enabling tracing for all sessions of a user.
For this we need to create a logon trigger.

CREATE OR REPLACE TRIGGER USER_TRACING_SESSION
AFTER LOGON ON DATABASE
BEGIN
  IF USER = 'SIEBEL' THEN
    EXECUTE IMMEDIATE 'alter session set events ''10046 trace name context forever, level 12''';
  END IF;
EXCEPTION
  WHEN OTHERS THEN
    NULL;
END;
/
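To stop tracing new SIEBEL logons, simply drop the trigger (sessions already being traced keep tracing until they disconnect or the event is turned off):

DROP TRIGGER USER_TRACING_SESSION;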


2. Enabling trace for a single session (using dbms_system)

SQL> EXEC DBMS_SYSTEM.set_sql_trace_in_session(sid=>123, serial#=>1234, sql_trace=>TRUE)

---To disable

SQL> EXEC DBMS_SYSTEM.set_sql_trace_in_session(sid=>123, serial#=>1234, sql_trace=>FALSE);

--- Get the tracefile name:

SELECT p.tracefile FROM   v$session s  JOIN v$process p ON s.paddr = p.addr WHERE  s.sid = 123;
TRACEFILE
------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/db11g/db11g/trace/db11g_ora_9699.trc

-- Use tkprof to generate readable file

tkprof /u01/app/oracle/diag/rdbms/db11g/db11g/trace/db11g_ora_9699.trc   trace_output.txt


3.  Enabling trace using oradebug.


--Get the spid from sid.

SELECT p.spid FROM gv$session s JOIN gv$process p ON p.addr = s.paddr AND p.inst_id = s.inst_id and s.sid=1105;

SPID
-----------------
3248

--- Enable tracing for that spid

SQL> oradebug setospid 3248
Oracle pid: 92, Unix process pid: 3248, image: oracle@sec58-6
SQL> oradebug EVENT 10046 trace name context forever, level 12
Statement processed.
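Optionally, right after oradebug setospid, remove the trace file size limit so a long trace is not truncated (the same step is used in the Data Pump section further down):

SQL> oradebug unlimit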

-- Find the trace file name

SQL> oradebug TRACEFILE_NAME

/oracle/app/oracle/diag/rdbms/b2crmd2/B2CRMD2/trace/B2CRMD2_ora_3248.trc

-- Disabling trace:



SQL> oradebug setospid 3248
Oracle pid: 92, Unix process pid: 3248, image: oracle@sec58-6
SQL> oradebug event 10046 trace name context off
Statement processed.


4. 10053 trace:
The 10053 trace is known as the optimizer trace. Below are the steps for generating a 10053 trace for a SQL statement.

Note: To generate a 10053 trace, the query must be hard parsed, so flush the SQL statement from the shared pool first.

--- set tracefile name

SQL>alter session set tracefile_identifier='TESTOBJ_TRC';

Session altered.

SQL>alter session set events '10053 trace name context forever, level 1';

Session altered.

-- hard parse the statement

SQL>Select count(*) from TEST_OBJ;

COUNT(*)
----------
33091072

exit
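If you would rather not exit the session, the event can also be switched off explicitly:

SQL> alter session set events '10053 trace name context off';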

-- trace file name:

/u01/app/oracle/admin/BBCRMST1/diag/rdbms/bbcrmst1/BBCRMST1/trace/BBCRMST1_ora_9046_TESTOBJ_TRC.trc

Alternatively, you can generate the 10053 trace without executing or hard parsing the SQL statement, using DBMS_SQLDIAG.DUMP_TRACE.

Suppose sql_id = dmx08r6ayx800 and the desired trace file identifier is TEST_OBJ3_TRC:

begin
dbms_sqldiag.dump_trace(p_sql_id=>'dmx08r6ayx800',
                        p_child_number=>0,
                        p_component=>'Compiler',
                        p_file_id=>'TEST_OBJ3_TRC');
END;
/


-- Trace file

-bash-4.1$ ls -ltr BBCRMST1_ora_27439_TEST_OBJ3_TRC.trc
-rw-r-----   1 oracle   oinstall  394822 Jun 30 14:17 BBCRMST1_ora_27439_TEST_OBJ3_TRC.trc

Start session trace

To start a SQL trace for the current session, execute:
ALTER SESSION SET sql_trace = true;
You can also add an identifier to the trace file name for later identification:
ALTER SESSION SET sql_trace = true;
ALTER SESSION SET tracefile_identifier = 'mysqltrace';

Stop session trace

To stop SQL tracing for the current session, execute:
ALTER SESSION SET sql_trace = false;

Tracing other users' sessions

DBAs can use DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION to trace problematic database sessions. Steps:
  • Get the SID and SERIAL# for the process you want to trace.
SQL> select sid, serial# from sys.v_$session where ...
       SID    SERIAL#
---------- ----------
         8      13607
  • Enable tracing for your selected process:
SQL> ALTER SYSTEM SET timed_statistics = true;
SQL> execute dbms_system.set_sql_trace_in_session(8, 13607, true);
  • Ask the user to run just enough of the workload to demonstrate the problem.
  • Disable tracing for your selected process:
SQL> execute dbms_system.set_sql_trace_in_session(8,13607, false);
  • Look for trace file in USER_DUMP_DEST:
$ cd /app/oracle/admin/oradba/udump
$ ls -ltr
total 8
-rw-r-----    1 oracle   dba         2764 Mar 30 12:37 ora_9294.trc
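If you are not sure where USER_DUMP_DEST points, check it first:

SQL> show parameter user_dump_dest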

********************************************************************************
A few other useful commands

SQL> EXEC DBMS_SYSTEM.set_sql_trace_in_session(sid=>123, serial#=>1234, sql_trace=>TRUE);
SQL> EXEC DBMS_SYSTEM.set_sql_trace_in_session(sid=>123, serial#=>1234, sql_trace=>FALSE);

SQL> EXEC DBMS_SYSTEM.set_ev(si=>123, se=>1234, ev=>10046, le=>8, nm=>' ');
SQL> EXEC DBMS_SYSTEM.set_ev(si=>123, se=>1234, ev=>10046, le=>0, nm=>' ');

Tracing Individual SQL Statements

SQL trace can be initiated for an individual SQL statement by substituting the required SQL_ID into the following statement.
SQL> ALTER SESSION SET EVENTS 'trace[rdbms.SQL_Optimizer.*][sql:sql_id]';
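For example, with the sql_id dmx08r6ayx800 used earlier in this post substituted in:

SQL> ALTER SESSION SET EVENTS 'trace[rdbms.SQL_Optimizer.*][sql:dmx08r6ayx800]';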

Tracing ORA-8103 with an errorstack dump

To dump an errorstack whenever ORA-8103 is raised, system wide:

alter system set events '8103 trace name errorstack forever, level 3';

To turn it off:

alter system set events '8103 trace name errorstack off';

If you want to turn it on at the session level (helpful when you can reproduce the error from your own session), try to reproduce the error, since a trace file is needed to investigate; if possible, run the procedures until the error reproduces.

oradebug setmypid 

alter session set max_dump_file_size=unlimited; 
alter session set db_file_multiblock_read_count=1; 
alter session set events 'immediate trace name trace_buffer_on level 1048576'; 
alter session set events '10200 trace name context forever, level 1'; 
alter session set events '8103 trace name errorstack level 3'; 
alter session set events '10236 trace name context forever, level 1'; 
alter session set tracefile_identifier='ORA8103'; 

run the query that produces the error ORA-8103 

alter session set events 'immediate trace name trace_buffer_off'; 
oradebug tracefile_name; 
exit 

When the error is reproduced, please upload the trace file identified above by oradebug tracefile_name.

Also upload the /var/adm/messages OS log file.


Data pump sql trace

7. How to get SQL trace files of the Data Pump processes?

For troubleshooting specific situations, it may be required to create a SQL trace file for an Export Data Pump or Import Data Pump job. These SQL trace files can be created by setting Event 10046 for a specific process (usually the Worker process). Note that these SQL trace files can become very large, so ensure that there is enough free space in the directory that is specified by the init.ora/spfile initialization parameter BACKGROUND_DUMP_DEST.
event 10046, level 1 = enable standard SQL_TRACE functionality
event 10046, level 4 = as level 1, plus trace the BIND values
event 10046, level 8 = as level 1, plus trace the WAITs
event 10046, level 12 = as level 1, plus trace the BIND values and the WAITs
Remarks:
  • level 1: lowest level tracing - not always sufficient to determine cause of errors;
  • level 4: useful when an error in Data Pump's worker or master process occurs;
  • level 12: useful when there is an issue with Data Pump performance.
When creating a level 8 or 12 SQL trace file, the init.ora/spfile initialization parameter TIMED_STATISTICS must be set to TRUE before the event is set and before the Data Pump job is started. The performance impact of setting this parameter temporarily to TRUE is minimal. SQL trace files created with level 8 or 12 are especially useful for investigating performance problems.
Example:
-- For Event 10046, level 8 and 12: ensure we gather time related statistics:

CONNECT / as sysdba
SHOW PARAMETER timed_statistics

NAME                              TYPE        VALUE
--------------------------------- ----------- ---------------------------
timed_statistics                  string      FALSE

ALTER SYSTEM SET timed_statistics = TRUE SCOPE = memory;

-- Now set the event and start the Data Pump job


-- To set the value back to the default:

ALTER SYSTEM SET timed_statistics = FALSE SCOPE = memory; 

7.1. Create a standard SQL_TRACE file (level 1).

If the output of standard SQL_TRACE functionality is sufficient (i.e. neither bind values nor wait details are needed), then this SQL tracing can be activated with the Data Pump parameter TRACE. To activate standard SQL tracing, use the value 1.
Example:
-- Trace Worker process (400300) with standard SQL_TRACE functionality (1):

% expdp system/manager DIRECTORY=my_dir DUMPFILE=expdp_f.dmp \
LOGFILE=expdp_f.log TABLES=scott.emp TRACE=400301 
Note that this level of tracing is usually not sufficient for tracing Data Pump when an error occurs or when there is an issue with Data Pump performance. For tracing Data Pump when an error occurs use level 4, and when there is an issue with Data Pump performance use level 12 (see sections below).

7.2. Activate SQL_TRACE on specific Data Pump process with higher trace level.

If a specific Data Pump process needs to be traced, more SQL_TRACE detail is required, and it is not necessary to trace the start of the job, then Event 10046 with the desired level can also be set on the process that needs to be traced (usually the Worker process).
Example:
- Start the Data Pump job, e.g.:

% expdp system/manager DIRECTORY=my_dir DUMPFILE=expdp_f%U.dmp \  
LOGFILE=expdp_f.log FILESIZE=2G FULL=y


-- In SQL*Plus, obtain Data Pump process info:
CONNECT / as sysdba

set lines 150 pages 100 numwidth 7
col program for a38
col username for a10
col spid for a7
select to_char(sysdate,'YYYY-MM-DD HH24:MI:SS') "DATE", s.program, s.sid,  
       s.status, s.username, d.job_name, p.spid, s.serial#, p.pid  
  from v$session s, v$process p, dba_datapump_sessions d 
 where p.addr=s.paddr and s.saddr=d.saddr; 

DATE                PROGRAM                                    SID STATUS
------------------- -------------------------------------- ------- --------
2007-10-19 08:58:41 ude@celclnx7.us.oracle.com (TNS V1-V3)     158 ACTIVE
2007-10-19 08:58:41 oracle@celclnx7.us.oracle.com (DM00)       143 ACTIVE
2007-10-19 08:58:41 oracle@celclnx7.us.oracle.com (DW01)       150 ACTIVE

USERNAME   JOB_NAME                       SPID    SERIAL#     PID
---------- ------------------------------ ------- ------- -------
SYSTEM     SYS_EXPORT_FULL_01             17288        29      18
SYSTEM     SYS_EXPORT_FULL_01             17292        50      22
SYSTEM     SYS_EXPORT_FULL_01             17294        17      23  
In the example output above we see that the Data Pump Master process (DM00) has SID: 143 and serial#: 50 and the Data Pump Worker process (DW01) has SID: 150 and serial#: 17. These details can be used to activate SQL tracing in SQL*Plus with DBMS_SYSTEM.SET_EV, e.g.:
-- In SQL*Plus, activate SQL tracing with DBMS_SYSTEM and SID/SERIAL# 
-- Syntax: DBMS_SYSTEM.SET_EV([SID],[SERIAL#],[EVENT],[LEVEL],'')

-- Example to SQL_TRACE Worker process with level 4 (Bind values):  
execute sys.dbms_system.set_ev(150,17,10046,4,'');

-- and stop tracing:
execute sys.dbms_system.set_ev(150,17,10046,0,''); 


-- Example to SQL_TRACE Master Control process with level 8 (Waits): 
execute sys.dbms_system.set_ev(143,50,10046,8,''); 

-- and stop tracing: 
execute sys.dbms_system.set_ev(143,50,10046,0,'');
The example output of the query above also shows that the Data Pump Master process (DM00) has OS process Id: 17292 and the Data Pump Worker process (DW01) has OS process Id: 17294. With this information, it is also possible to use 'oradebug' in SQL*Plus to activate SQL tracing for those processes, e.g.:
-- In SQL*Plus, activate SQL tracing with ORADEBUG and the SPID:

-- Example to SQL_TRACE Worker process with level 4 (Bind values):
oradebug setospid 17294
oradebug unlimit
oradebug event 10046 trace name context forever, level 4
oradebug tracefile_name

-- Example to SQL_TRACE Master Control process with level 8 (Waits):
oradebug setospid 17292 
oradebug unlimit 
oradebug event 10046 trace name context forever, level 8 
oradebug tracefile_name 


-- To stop the tracing:
oradebug event 10046 trace name context off
Either DBMS_SYSTEM.SET_EV or 'oradebug' can be used to create a Data Pump trace file.

7.3. Place complete database in SQL_TRACE with specific level. 
It is possible that there is not enough time to activate tracing on a specific Data Pump process because an error occurs at an early stage of the job, or that the Data Pump process needs to be traced from the beginning. In those cases, Event 10046 with the desired level has to be set in SQL*Plus at the database level, and the Data Pump job has to be started afterwards. When the job completes, unset the event again.
Example:
-- Activate SQL tracing database wide,
-- Be careful: all processes will be traced! 
--
-- never do this on production unless a maintenance window
-- once issued in PROD you may not be able to stop if load is high
-- careful with directories filling up
--

CONNECT / as sysdba  
ALTER SYSTEM SET EVENTS '10046 trace name context forever, level 4';  


- Start the Export Data Pump or Import Data Pump job, e.g.:

% expdp system/manager DIRECTORY=my_dir DUMPFILE=expdp_f.dmp \  
LOGFILE=expdp_f.log TABLES=scott.emp


-- Unset event immediately after Data Pump job ends:

ALTER SYSTEM SET EVENTS '10046 trace name context off';
Be careful though: the steps above will result in SQL tracing on all processes, so only use this method if no other database activity takes place (or hardly any other activity), and when the Data Pump job ends relatively quickly.

7.4. Analyze the SQL trace files and create a TKPROF output file.
If the SQL trace files were created with level 1 or 4, then we are usually interested in the statements (and their bind variables). Example scenario: Data Pump aborts with a specific error. When investigating those kinds of errors, it makes sense to compress the complete trace file and upload the compressed file.

If the SQL trace files were created with level 8 or 12, then we are usually interested in the timing of the statements (and their wait events). Example scenario: there is a suspected performance issue during a Data Pump job. These SQL trace files can become very large, and the tkprof output after analyzing the files is in most cases more meaningful. When investigating those kinds of issues, it makes sense to upload only the tkprof output files.
Example:
-- create standard tkprof output files for Data Pump Master and Worker SQL traces:

% cd /oracle/admin/ORCL/BDUMP
% tkprof orcl_dm00_17292.trc tkprof_orcl_dm00_17292.out waits=y sort=exeela
% tkprof orcl_dw01_17294.trc tkprof_orcl_dw01_17294.out waits=y sort=exeela
For details about Event 10046 and tkprof, see also:
Note 21154.1 "EVENT: 10046 enable SQL statement tracing (including binds/waits)"
Note 32951.1 "TKPROF Interpretation"





************************************************************************************
My notes

Run expdp
then connect to the database and run the following query:

select to_char(sysdate,'YYYY-MM-DD HH24:MI:SS') "DATE", s.program, s.sid, s.serial#,   
       s.status, s.username, d.job_name, p.spid, p.pid    
  from v$session s, v$process p, dba_datapump_sessions d   
 where p.addr=s.paddr and s.saddr=d.saddr;   

get the sid and serial# for the DM00 and DW0n (master and worker) processes

then execute set_ev in the following fashion to enable the trace:

1st number: sid
2nd number: serial#
3rd number: event (10046)
4th number: level (4, 8 or 12)
execute sys.dbms_system.set_ev(1381,55489,10046,8,'');  
execute sys.dbms_system.set_ev(1897,18963,10046,8,'');  
execute sys.dbms_system.set_ev(864,34228,10046,8,'');  


After completion, turn off the trace by changing the 4th number to zero, for example:
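-- disable, using the same sid/serial# as the first enable example above:
execute sys.dbms_system.set_ev(1381,55489,10046,0,'');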


Tuesday, August 25, 2015

Flushing a Single SQL Statement out of the Object Library Cache

It is well known that the entire shared pool can be flushed with a simple ALTER SYSTEM statement.
SQL> ALTER SYSTEM FLUSH SHARED_POOL;

System altered.

What if the execution plan of a single SQL statement has to be invalidated or flushed out of the shared pool, so that the subsequent execution forces a hard parse of that SQL statement? Oracle 11g introduced a new procedure called PURGE in the DBMS_SHARED_POOL package to flush a specific object, such as a cursor, package, sequence or trigger, out of the object library cache.
The syntax for the PURGE procedure is shown below.
procedure PURGE (
        name VARCHAR2, 
        flag CHAR DEFAULT 'P', 
        heaps NUMBER DEFAULT 1)

An explanation of each of the arguments is documented in detail in the $ORACLE_HOME/rdbms/admin/dbmspool.sql file.
If a single SQL statement has to be flushed out of the object library cache, the first step is to find the address of the handle and the hash value of the cursor that has to go away. The name of the object [to be purged] is the concatenation of the ADDRESS and HASH_VALUE columns from the V$SQLAREA view. Here is an example:
SQL> select ADDRESS, HASH_VALUE from V$SQLAREA where SQL_ID like '7yc%';

ADDRESS   HASH_VALUE
---------------- ----------
000000085FD77CF0  808321886

SQL> exec DBMS_SHARED_POOL.PURGE ('000000085FD77CF0, 808321886', 'C');

PL/SQL procedure successfully completed.

SQL> select ADDRESS, HASH_VALUE from V$SQLAREA where SQL_ID like '7yc%';

no rows selected

Thursday, August 13, 2015

Transparent data encryption TDE



Creating TDE and wallet in 12c
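-- sqlnet.ora entry for the keystore location: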

ENCRYPTION_WALLET_LOCATION =
  (SOURCE = (METHOD = FILE)
   (METHOD_DATA =
    (DIRECTORY = /opt/app/oracle/admin/$ORACLE_UNQNAME/wallet)
   )
  )


--Create key store

ADMINISTER KEY MANAGEMENT CREATE KEYSTORE '/opt/app/oracle/admin/shwm/wallet/' IDENTIFIED BY "xxxxxxxx";


-- Open
ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN IDENTIFIED BY "xxxxxxxx";

-- Close
ADMINISTER KEY MANAGEMENT SET KEYSTORE CLOSE IDENTIFIED BY "xxxxxxxx";


--create masterkey

administer key management set key identified by "xxxxxxxx" with backup;



--Auto login

administer key management create auto_login keystore from keystore '/opt/app/oracle/admin/shwm/wallet/' identified by "xxxxxxxx";



--encryption tablespace online 12c

alter system set compatible='12.2.0.0.0' scope=spfile;

select tablespace_name,encrypted from dba_tablespaces where tablespace_name='TXT';

ALTER TABLESPACE txt ENCRYPTION ONLINE USING 'AES256' ENCRYPT;
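12.2 can also decrypt the tablespace online again; a minimal sketch, assuming the same TXT tablespace:

ALTER TABLESPACE txt ENCRYPTION ONLINE DECRYPT;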


******************************************************************************
With TDE we have column-level encryption starting from 10g and tablespace-level encryption from 11g.

select * from v$encryption_wallet;


For tablespace level

Prior to 12.2 we cannot convert an existing tablespace to an encrypted tablespace.

The only way to do it is to create a new encrypted tablespace and transfer the objects, either with CTAS or expdp/impdp (12.2 adds the online conversion shown above).

Encrypted tablespaces are created by using the ENCRYPTION clause at the time of tablespace creation.



Column-level encryption can be done on existing columns.

The drawbacks are performance overhead, and range scans are not possible if an index is present on an encrypted column (for example, when using the LIKE operator in a query).

For either of these you need a database wallet installed and configured.

If you restart the database you need to open the wallet again, or else none of the encrypted data can be read.
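In pre-12c syntax the wallet is opened like this (12c uses the ADMINISTER KEY MANAGEMENT commands shown earlier in this post); the password is whatever was set when the wallet was created:

ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "xxxxxxxx";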

For reference on wallet creation check here

http://dbasravan.blogspot.com/2015/03/wallet.html


Sample commands for creating encrypted tablespace and encrypting columns are listed below

CREATE TABLE tde_test (
  id    NUMBER(10),
  data  VARCHAR2(50) ENCRYPT
)
TABLESPACE tde_test;
SELECT * FROM dba_encrypted_columns;

CREATE TABLESPACE ts_tde
DATAFILE '/u01/app/oracle/oradata/ora11g/ts_tde01.dbf' 
  SIZE 20m AUTOEXTEND ON NEXT 5m
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO
ENCRYPTION USING '3DES168'
DEFAULT STORAGE (ENCRYPT);





SELECT tablespace_name , encrypted 
FROM dba_tablespaces;

TABLESPACE_NAME                ENC
------------------------------ ---
SYSTEM                         NO
SYSAUX                         NO
UNDOTBS1                       NO
TEMP                           NO
USERS                          NO
EXAMPLE                        NO
TS_TDE                         YES

SELECT t.name , e.encryptionalg , e.encryptedts
FROM v$tablespace t , v$encrypted_tablespaces e
WHERE t.ts# = e.ts#;

NAME                           ENCRYPT ENC
------------------------------ ------- ---
TS_TDE                         3DES168 YES



alter table cc add (SOCIAL_SEC_NO varchar2(9) encrypt using 'AES128');


alter table cc modify (cc_no encrypt using 'AES128');






Encryption Salt

Consider this scenario. An intruder has stolen the backups of the medical records database containing the patient information; but since he does not have the wallet password he will not be able to open the wallet and see the clear text values. He can, however, still read the raw data files and see the encrypted values. This action by itself does not reveal the sensitive data, but it may reveal a pattern which might help the intruder. For instance, assume the intruder knows a specific patient and the diagnosis made on her - cancer. From the data files, he can see the encrypted value of this diagnosis code "cancer". Then he can scan the file to see the identical values in other records, which will help him know who else has the same diagnosis code, i.e. cancer. So even though he may not know the actual value, he has learned who all have the same diagnosis by establishing a pattern. Similarly by knowing some key patients, he can learn a lot about other patients by this pattern analysis. This may not be acceptable as a security standard.

To prevent such a possibility, you can add some "salt". This is merely a random value added in the process to make the encrypted values different even if the clear text values are the same. In many cases, this is actually desirable; hence TDE adds a salt to the value by default. Now even if two patients have the same diagnosis code, the encrypted value stored in the database will be very different.

In some cases, you may not want to add a salt. In that case, you can override the default by specifying the NO SALT clause while defining encryption. For instance, while modifying the column for encryption, you can use:

alter table cc modify (cc_no encrypt using 'AES256' no salt);

The "NO SALT" clause does not add salt to the clear text value before encrypting. To remove salt from a previously encrypted table, you can issue:

alter table cc modify (cc_no encrypt no salt);

If you have defined a column as encrypted with salt, you can't create an index on it. If you do attempt it, you will get the following error:

ORA-28338: cannot encrypt indexed column(s) with salt

You can remove salt using the statements shown above. Note that removing the salt actually triggers re-encryption of the column, which may generate a large amount of undo and redo.

***********************
Note: use the query below to verify that the master key is the same in all instances of a RAC database.
select ts#, masterkeyid, utl_raw.cast_to_varchar2( utl_encode.base64_encode('01'||substr(masterkeyid,1,4))) || utl_raw.cast_to_varchar2( utl_encode.base64_encode(substr(masterkeyid,5,length(masterkeyid)))) masterkeyid_base64 FROM v$encrypted_tablespaces; 

Heap size xxxx exceeds notification threshold (8192K)

Set _kgl_large_heap_warning_threshold to a reasonably high value, or to zero, to prevent these warning messages. The value needs to be set in bytes.

For example, to raise the threshold to 12288000 bytes when using an spfile (see the note at the end on how this value was chosen):

(logged in as "/ as sysdba")

SQL> alter system set "_kgl_large_heap_warning_threshold"=12288000 scope=spfile ;

SQL> shutdown immediate
SQL> startup

If using an "old-style" init parameter file, edit the parameter file and add, for example:

_kgl_large_heap_warning_threshold=8388608




In my case the heap size reported in the error is 11677K, so I rounded up and multiplied 12000 * 1024 = 12288000.
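To check the current value of the hidden parameter, a commonly used query against the X$ fixed tables (run as SYSDBA):

select a.ksppinm, b.ksppstvl
  from x$ksppi a, x$ksppcv b
 where a.indx = b.indx
   and a.ksppinm = '_kgl_large_heap_warning_threshold';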

Wednesday, August 12, 2015

Auto extend off all datafiles in database at once

select 'ALTER database datafile '''||FILE_NAME||''' autoextend off;'  from dba_data_files where tablespace_name not in ('SYSTEM','SYSAUX','UNDOTBS1','USERS','UNDOTBS2') and AUTOEXTENSIBLE='YES';

run the output :)
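A minimal SQL*Plus sketch to spool the generated statements to a file and run them (assuming the current directory is writable; the file name autoextend_off.sql is just an example):

set heading off feedback off pages 0 lines 200 trimspool on
spool autoextend_off.sql
select 'ALTER database datafile '''||FILE_NAME||''' autoextend off;'  from dba_data_files where tablespace_name not in ('SYSTEM','SYSAUX','UNDOTBS1','USERS','UNDOTBS2') and AUTOEXTENSIBLE='YES';
spool off
@autoextend_off.sql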

Friday, August 7, 2015

Drop all objects under schema

select 'drop  '||object_type||' '||owner||'.'||object_name||';' from dba_objects where OWNER= 'CAARS_DM_USER';

for sdwh refresh

select 'drop  '||object_type||' '||owner||'.'||object_name||';' from dba_objects where OWNER in ('CAARS_DM_USER','DM_PWR_USER','DM_USER','DPMS_DM_USER','DW_USER','EDQ','ETL','HARTS_DM_USER','HDD_DM_USER','HMA_CONTROL','HMA_CRM_USER','HMA_DM_INCENTIVES','HMA_DM_MARKETING','HMA_DM_SALES','HMA_DM_SERVICE','HMA_DM_SSBI','HMA_HIST','HMA_LOOKUP','HMA_MARKETING','HMA_ODS','HMA_PROD','HMA_STAGE','HMA_STG_MV_USER','INCENT_DM_USER','LMRS_DM_USER','MKTG_DM_USER','QIS_DM_USER','SALES_DM_USER','SCORE_DM_USER','SED_DM_USER','SSBI_DM_USER','VOC_DM_USER','WIS_DM_USER'
);

Patch 22191577: GRID INFRASTRUCTURE PATCH SET UPDATE 11.2.0.4.160119 (JAN2016) Unzip the patch 22191577 Unzip latest Opatch Version in or...