Monday, September 15, 2014

Oracle ASM: ASMCMD Commands List with Description

ASMCMD (ASM Command-Line Utility) Reference


Commands List

1. cd:  It changes the current directory to the specified directory.

2. cp: It enables you to copy files between ASM disk groups on a local instance and remote instances.

3. du: It displays the total disk space occupied by ASM files in the specified ASM directory and all of its subdirectories, recursively.

4. exit: It exits ASMCMD.

5. find: It lists the paths of all occurrences of the specified name (with wildcards) under the specified directory.

6. help: It displays the syntax and description of ASMCMD commands.

7. ls: It lists the contents of an ASM directory, the attributes of the specified file, or the names and attributes of all disk groups.

8. lsct: It lists information about current ASM clients.

9. lsdg: It lists all disk groups and their attributes.

10. lsdsk: It lists disks visible to ASM.

11. md_backup: It creates a backup file containing the metadata of the mounted disk groups.

12. md_restore: It restores disk group metadata from an md_backup file.

13. mkalias: It creates an alias for system-generated filenames.

14. mkdir: It creates ASM directories.

15. pwd: It displays the path of the current ASM directory.

16. remap: It repairs a range of physical blocks on a disk.

17. rm: It deletes the specified ASM files or directories.

18. rmalias: It deletes the specified alias, retaining the file that the alias points to.

--------
-- kfed utility, run from the OS prompt, reads an ASM disk header:
kfed read /dev/vx/rdsk/fra1dg/fra1
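
As an illustration of what kfed inspects, the sketch below plants the ASCII tag ORCLDISK (which an ASM disk header carries near the start of block 0) into a scratch file and scans for it. The offset and the scratch file are assumptions for demonstration only; against a real disk you would point at the device path, as with kfed above.

```shell
# Demonstration only: simulate an ASM disk header tag in a scratch file,
# then scan the first block for it. Offset 32 is used here purely for
# illustration; do NOT write to a real ASM disk.
img=$(mktemp)
printf 'ORCLDISK' | dd of="$img" bs=1 seek=32 conv=notrunc 2>/dev/null
# Read the first 4 KB of the "disk" and look for the header tag
dd if="$img" bs=4096 count=1 2>/dev/null | grep -ao ORCLDISK
rm -f "$img"
```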

Oracle ASM: CREATE and Manage Tablespaces and Datafiles on ASM

CREATE and Manage Tablespaces and Datafiles on ASM


1.       Create New Tablespace on ASM Diskgroup
CREATE TABLESPACE TEST_TBS DATAFILE '+DATA' SIZE 100M AUTOEXTEND ON;

2.       Add new Datafile in Tablespace on ASM Diskgroup
ALTER TABLESPACE TEST_TBS ADD DATAFILE '+DATA' SIZE 100M;

3.       Alter Datafile in Tablespace in ASM
ALTER DATABASE DATAFILE '+DATA/rpst12/datafile/qnamicrmd.259.668957419' RESIZE 150M;

4.       Create new Diskgroup with Redundancy
create diskgroup DATADG external redundancy disk '/dev/mapper/mpathab1'
ATTRIBUTE 'compatible.rdbms' = '11.1', 'compatible.asm' = '11.1';

5.       Dynamic SQL to add newly added disks into a Diskgroup
select 'alter diskgroup DATA add disk ''' || PATH || ''';' from v$asm_disk
where GROUP_NUMBER=0 and rownum<=&Num_Disks_to_add;

6.       Remove ASM disk headers (destructive: generates dd commands that wipe the headers of all disks with GROUP_NUMBER=0)
select 'dd if=/dev/zero of=''' ||PATH||''' bs=8192 count=100' from v$asm_disk
where GROUP_NUMBER=0;

7.       SQL to find candidate disks
SELECT
    NVL(a.name, '[CANDIDATE]')      disk_group_name
  , b.path                          disk_file_path
  , b.name                          disk_file_name
  , b.failgroup                     disk_file_fail_group
FROM
    v$asm_diskgroup a RIGHT OUTER JOIN v$asm_disk b USING (group_number)
ORDER BY
    a.name;
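
The dd-generating query above can be approximated in shell for review before anything destructive is run; the disk paths below are hypothetical and the commands are only printed, never executed:

```shell
# Dry run: print (do not execute) the header-wipe commands for a
# hand-maintained list of candidate disk paths (hypothetical examples).
for path in /dev/oracleasm/disks/ASMDATA01 /dev/oracleasm/disks/ASMDATA02; do
  # bs=8192 count=100 overwrites the first 800 KB, enough to clear the header
  echo "dd if=/dev/zero of='${path}' bs=8192 count=100"
done
```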

Oracle ASM: Performance Tuning Analysis

Performance Tuning Analysis

There are hundreds of ways to tune a database; the queries below focus on ASM I/O statistics.

1.       Overall Performance Statistics (times are in hundredths of a second)
col READ_TIME format 9999999999.99
col WRITE_TIME format 9999999999.99
col BYTES_READ format 99999999999999.99
col BYTES_WRITTEN  format 99999999999999.99
select name, STATE, group_number, TOTAL_MB, FREE_MB,READS, WRITES, READ_TIME,
 WRITE_TIME, BYTES_READ, BYTES_WRITTEN, REPAIR_TIMER,MOUNT_DATE
from v$asm_disk order by group_number, name;
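
Since READ_TIME and WRITE_TIME are reported in hundredths of a second, average latency per I/O can be derived as TIME * 10 / COUNT milliseconds. A quick sketch with made-up sample figures:

```shell
# Derive average read latency (ms) from v$asm_disk counters.
# The values below are made-up samples, not real measurements.
reads=5000        # v$asm_disk.READS
read_time=250     # v$asm_disk.READ_TIME, in hundredths of a second
awk -v r="$reads" -v t="$read_time" \
  'BEGIN { printf "avg read latency: %.2f ms\n", t * 10 / r }'
```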


2.       Check the number of extents in use per disk within one disk group.
select max(substr(name,1,30)) group_name, count(PXN_KFFXP) extents_per_disk,
DISK_KFFXP, GROUP_KFFXP from x$kffxp, v$ASM_DISKGROUP gr
where GROUP_KFFXP=&group_nr and GROUP_KFFXP=GROUP_NUMBER
group by GROUP_KFFXP, DISK_KFFXP order by GROUP_KFFXP, DISK_KFFXP;


3.       Find The File distribution Between Disks
SELECT * FROM v$asm_alias  WHERE  name='PWX_DATA.272.669293645';

SELECT GROUP_KFFXP Group#,DISK_KFFXP Disk#,AU_KFFXP AU#,XNUM_KFFXP Extent#
FROM   X$KFFXP WHERE  number_kffxp=(SELECT file_number FROM v$asm_alias
WHERE name='PWX_DATA.272.669293645');

Or

SELECT GROUP_KFFXP Group#,DISK_KFFXP Disk#,AU_KFFXP AU#,XNUM_KFFXP Extent#
FROM X$KFFXP WHERE  number_kffxp=&DataFile_Number;

Or

select d.name, XV.GROUP_KFFXP Group#, XV.DISK_KFFXP Disk#,
XV.NUMBER_KFFXP File_Number, XV.AU_KFFXP AU#, XV.XNUM_KFFXP Extent#,
XV.ADDR, XV.INDX, XV.INST_ID, XV.COMPOUND_KFFXP, XV.INCARN_KFFXP,
XV.PXN_KFFXP, XV.XNUM_KFFXP,XV.LXN_KFFXP, XV.FLAGS_KFFXP,
XV.CHK_KFFXP, XV.SIZE_KFFXP from v$asm_disk d, X$KFFXP XV
where d.GROUP_NUMBER=XV.GROUP_KFFXP and d.DISK_NUMBER=XV.DISK_KFFXP
and number_kffxp=&File_NUM order by 2,3,4;


4.       List the hierarchical tree of files stored in the diskgroup
SELECT concat('+'||gname, sys_connect_by_path(aname, '/')) full_alias_path FROM
(SELECT g.name gname, a.parent_index pindex, a.name aname,
a.reference_index rindex FROM v$asm_alias a, v$asm_diskgroup g
WHERE a.group_number = g.group_number)
START WITH (mod(pindex, power(2, 24))) = 0
CONNECT BY PRIOR rindex = pindex;

Oracle ASM :How to Manage DiskGroup in ASM

How to Manage DiskGroup in ASM


How to Add/Remove/Manage ASM Configuration

1.       Use the following syntax to add configuration information about an existing ASM instance:
srvctl add asm -n node_name -i +asm_instance_name -o oracle_home

2.       Use the following syntax to remove an ASM instance:
srvctl remove asm -n node_name [-i +asm_instance_name]

3.       Use the following syntax to enable an ASM instance:
srvctl enable asm -n node_name [-i +asm_instance_name]

4.       Use the following syntax to disable an ASM instance:
srvctl disable asm -n node_name [-i +asm_instance_name]

5.       Use the following syntax to start an ASM instance:
srvctl start asm -n node_name [-i +asm_instance_name] [-o start_options]

6.       Use the following syntax to stop an ASM instance:
srvctl stop asm -n node_name [-i +asm_instance_name] [-o stop_options]

7.       Use the following syntax to show the configuration of an ASM instance:
srvctl config asm -n node_name

8.       Use the following syntax to obtain the status of an ASM instance:
srvctl status asm -n node_name

Note: For all of the SRVCTL commands in this section for which the -i option is not required, if you do not specify an instance name, the command applies to all of the ASM instances on the node.
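
A quick way to sanity-check the syntax above is to echo the commands with your own names substituted; node1 and +ASM1 below are placeholders, and nothing is executed:

```shell
# Dry run: print the srvctl calls with placeholder names substituted.
node=node1      # hypothetical node name
inst=+ASM1      # hypothetical ASM instance name
echo "srvctl status asm -n $node"
echo "srvctl stop asm -n $node -i $inst"
echo "srvctl start asm -n $node -i $inst"
```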


Some Useful Queries

Note: Adding or dropping disks initiates a rebalance of the data across the disks in the group. The status of these operations can be monitored by selecting from v$asm_operation.

1.       Querying ASM Disk Groups
col name format a25
col DATABASE_COMPATIBILITY format a10
col COMPATIBILITY format a10
select * from v$asm_diskgroup;
Or

select name, state, type, total_mb, free_mb from v$asm_diskgroup;

2.       Querying ASM Disks
col PATH format a55
col name format a25
select name, path, group_number, TOTAL_MB, FREE_MB, READS, WRITES, READ_TIME,
WRITE_TIME from v$asm_disk order by 3,1;

Or

col PATH format a50
col HEADER_STATUS  format a12
col name format a25
--select INCARNATION,
select name, path, MOUNT_STATUS,HEADER_STATUS, MODE_STATUS, STATE, group_number,
OS_MB, TOTAL_MB, FREE_MB, READS, WRITES, READ_TIME, WRITE_TIME, BYTES_READ,
BYTES_WRITTEN, REPAIR_TIMER, MOUNT_DATE, CREATE_DATE from v$asm_disk;


Oracle ASM: How to Create and Modify Disk Group

How to Create and Modify Disk Group

1.       Create New Disk Group
create diskgroup DATA external redundancy disk '/dev/oracleasm/disks/ASMDATA01'
ATTRIBUTE 'compatible.rdbms' = '11.1', 'compatible.asm' = '11.1';

alter diskgroup DATA check all;

If RAC then on second ASM Instance (Example ASM2):

alter diskgroup DATA mount;

2.       Add a new disk to an existing diskgroup
alter diskgroup DATA add disk '/dev/oracleasm/disks/ASMDATA02';

3.       Add several disks with a wildcard:
alter diskgroup DATA add disk '/dev/oracleasm/disks/ASMDATA*';

4.       Remove a disk from a diskgroup:
alter diskgroup DATA drop disk 'DATA_0001';

5.       Drop the entire DiskGroup
drop diskgroup DATA including contents;

6.       How to DROP the entire DiskGroup when it is in NOMOUNT Status
Generate the dd commands that reset the headers of all disks belonging to GROUP_NUMBER=0 (destructive!):
select 'dd if=/dev/zero of=''' ||PATH||''' bs=8192 count=100' from v$asm_disk
where GROUP_NUMBER=0;

select * from v$asm_operation;

7. After adding a new disk or dropping an existing disk, rebalance the data across all disks with the command below.
alter diskgroup DATA rebalance power 10;




--When a new diskgroup is created, it is only mounted on the local instance,
--and only the instance-specific entry for the asm_diskgroups parameter is updated.
--By manually mounting the diskgroup on other instances, the asm_diskgroups parameter
--on those instances are updated.

--on +ASM1 :
create diskgroup FRA1 external redundancy disk '/dev/vx/rdsk/fradg/fra1'
ATTRIBUTE 'compatible.rdbms' = '11.1', 'compatible.asm' = '11.1';

--on +ASM2 :
alter diskgroup FRA1 mount;


Thursday, September 11, 2014

Oracle 11.2 Datapump ORA-31693 ORA-02354 ORA-01555

Oracle 11.2 Datapump ORA-31693 ORA-02354 ORA-01555

Error Description:

ORA-31693: Table data object "TRIP"."BOOKINGCHANGELOG" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-01555: snapshot too old: rollback segment number 60 with name "_SYSSMU60_502175397$" too small

Reason:

There can be multiple reasons for it.
1.       MSSM (Manual Segment Space Management)
Reference link Metalink Note 800386.1 ORA-1555 - UNDO_RETENTION is silently ignored if the LOB resides in a MSSM tablespace
ORA-1555/ORA-22924 may occur when accessing LOB columns, even when the LOB RETENTION seems to be sufficient. This may occur when LOB column resides in a MSSM (Manual Segment Space Management) tablespace.
2.       The export was started with the FLASHBACK_TIME option to get a consistent copy of the data, which forces the database to keep data in UNDO segments for a longer duration. In other words, the UNDO_RETENTION parameter is set too low relative to the time specified with FLASHBACK_TIME.


Solution:

1.       Increase UNDO_RETENTION parameter to a value sufficient to get all info required by the flashback time.

2.       Also, consider increasing undo tablespace size.

Oracle 11.2 Datapump ORA-31617, ORA-19505, ORA-27037

ORACLE 11.2. Datapump ORA-31617 ORA-19505 ORA-27037

Scenario:

In RAC, when you start an export with the PARALLEL parameter and use the SCAN name as the connection string, Oracle connects to multiple nodes and tries to create/find dump files on each of them.

After the export starts, the dump files are created on the node where the export command was executed; when a worker process on the second node later tries to find a file to write to, it fails with the errors below.

ORA-31693: Table data object "TRIP"."TRIP_15" failed to load/unload and is being skipped due to error:
ORA-31617: unable to open dump file "/u01/expdp/TRIP_test_03.dmp" for write
ORA-19505: failed to identify file "/u01/expdp/TRIP_test_03.dmp"

ORA-27037: unable to obtain file status

To troubleshoot this error, execute SELECT on gv$session view:

SID     MACHINE   PROGRAM                 INST_ID
------  --------  ----------------------  -------
125     rac-01    oracle@rac-01 (DM00)          1
24      rac-01    oracle@rac-01 (DW02)          1
134     rac-01    oracle@rac-01 (DW00)          1
18      rac-02    oracle@rac-02 (DW00)          2
2252    rac-02    oracle@rac-02 (DW01)          2



This output shows that connections are distributed across both nodes, which is why Data Pump errors out.

There are three options to avoid this error:
1. Create the directory object on a clustered file system.

2. Export without using the PARALLEL clause.
expdp dumpfile=TRIP_%u.dmp logfile=test.log directory=temp_expdp schemas=test

3. Use the CLUSTER option on the expdp command line. CLUSTER=N constrains the job to run only on the instance where it is started.
expdp dumpfile=TRIP_%u.dmp logfile=test.log directory=temp_expdp schemas=test cluster=n
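
For reference, the %u substitution variable in dumpfile=TRIP_%u.dmp expands to a two-digit sequence number per file; the loop below just prints the names a two-file job would typically create (illustrative only, not a real export):

```shell
# Print the dump file names that dumpfile=TRIP_%u.dmp would expand to
# for two files (illustration of %u substitution, not a real export).
for n in 01 02; do
  echo "TRIP_${n}.dmp"
done
```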



Wednesday, September 10, 2014

Using RMAN Incremental Backup to SYNC Standby Database

Using RMAN Incremental Backup to SYNC Standby Database

Step 1: Create the Incremental Backup
Create the needed incremental backup at the source database, using BACKUP with the INCREMENTAL FROM SCN clause.
Assume that the incremental backup to be used in updating the duplicate database is to be created on disk, with the filenames for backup pieces determined by the format /tmp/incr_for_standby/bkup_%U.

RMAN> BACKUP DEVICE TYPE DISK INCREMENTAL FROM SCN 750983 DATABASE
     FORMAT '/tmp/incr_for_standby/bkup_%U';

Step 2: Make the Incremental Backup Accessible at the Standby Database
Make the backup pieces containing the incremental backup available in some directory accessible on the system containing the standby database. For this example, assume that the destination directory is called /standbydisk1/incrback/ and ensure that it contains nothing besides the incremental backups from Step 1.

Step 3: Catalog the Incremental Backup Files at the Standby Database
Use the RMAN CATALOG command to register the backup sets in the RMAN repository at the duplicate. With an RMAN client connected to the standby database and the recovery catalog (if you use one at the standby), mount the standby and run the following command:
RMAN> CATALOG START WITH '/standbydisk1/incrback/';
The backups are now available for use in recovery of the standby.

Step 4: Apply the Incremental Backup to the Standby Database
Use the RMAN RECOVER command with the NOREDO option to apply the incremental backup to the standby database. All changed blocks captured in the incremental backup are updated at the standby database, bringing it up to date with the primary database. With an RMAN client connected to the standby database, run the following command:
RMAN> RECOVER DATABASE NOREDO;
run
{ allocate channel aux_ch1 device type disk format '/backup/oracle/PWHGVA20/rman/incr/bkup_%U';
    recover database noredo;
 }
Follow this step-by-step procedure to roll forward a physical standby database that has fallen far behind the primary database.
Note: The steps in this section can also be used to resolve problems if a physical standby database has lost or corrupted archived redo data or has an unresolvable archive gap.
select 'backup incremental from scn '||current_scn||'  database format "/backup/oracle/PWHRMN20/rman/INC_FOR_STANDBY_%U.bkp";' from v$database;

1.       On the standby database, query the V$DATABASE view and record the current SCN of the standby database:
SELECT CURRENT_SCN FROM V$DATABASE;

CURRENT_SCN
-----------
     233995
2.       Stop Redo Apply on the standby database:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
3.       Connect to the primary database as the RMAN target and create an incremental backup from the current SCN of the standby database that was recorded in step 1:
RMAN> BACKUP INCREMENTAL FROM SCN 233995 DATABASE FORMAT '/tmp/ForStandby_%U' tag 'FOR STANDBY';

Note:
RMAN does not consider the incremental backup as part of a backup strategy at the source database. Hence:
The backup is not suitable for use in a normal RECOVER DATABASE operation at the source database
The backup is not cataloged at the source database
The backup sets produced by this command are written to the /dbs location by default, even if the flash recovery area or some other backup destination is defined as the default for disk backups.
You must create this incremental backup on disk for it to be useful. When you move the incremental backup to the standby database, you must catalog it at the standby as described in Oracle Database Backup and Recovery Advanced User's Guide. Backups on tape cannot be cataloged.

   4. Transfer all backup sets created on the primary system to the standby system (note that there may be more than one backup file created):

      scp /tmp/ForStandby_* standby:/tmp

   5. Connect to the standby database as the RMAN target, and catalog all incremental backup pieces:

      RMAN> CATALOG START WITH '/tmp/ForStandby_';

   6. Connect to the standby database as the RMAN target and apply incremental backups

      RMAN> RECOVER DATABASE NOREDO;

   7. Remove the incremental backups from the standby system:

      RMAN> DELETE BACKUP TAG 'FOR STANDBY';

   8. Manually remove the incremental backups from the primary system:

      rm /tmp/ForStandby_*

   9. Start Redo Apply on the physical standby database:

      ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

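The steps above can be collected into a dry-run script that prints the command sequence for a given SCN; the SCN and directories are the example values from this procedure, and every line is echoed for review rather than executed:

```shell
# Dry run of the standby roll-forward sequence; nothing is executed.
scn=233995                 # standby CURRENT_SCN recorded in step 1
backup_dir=/tmp            # example backup location from this procedure
echo "RMAN> BACKUP INCREMENTAL FROM SCN $scn DATABASE FORMAT '$backup_dir/ForStandby_%U' TAG 'FOR STANDBY'"
echo "scp $backup_dir/ForStandby_* standby:/tmp"
echo "RMAN> CATALOG START WITH '/tmp/ForStandby_'"
echo "RMAN> RECOVER DATABASE NOREDO"
```
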
Oracle Database Space Usage

Oracle Database Space Usage

1.       Whole Database Space Usage 
SELECT   ROUND (SUM (used.BYTES) / 1024 / 1024 / 1024)
    || ' GB' "Database Size",
         ROUND (SUM (used.BYTES) / 1024 / 1024 / 1024)
    -  ROUND (free.p / 1024 / 1024 / 1024)
    || ' GB' "Used space",
    ROUND (free.p / 1024 / 1024 / 1024) || ' GB' "Free space"
FROM (SELECT BYTES
         FROM v$datafile
     UNION ALL
     SELECT BYTES
         FROM v$tempfile
     UNION ALL
     SELECT BYTES
         FROM v$log) used,
    (SELECT SUM (BYTES) AS p
         FROM dba_free_space) free
GROUP BY free.p;
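
The GB arithmetic in the query above is just bytes divided by 1024 three times; restated in shell with a made-up byte count:

```shell
# Convert a byte count to whole gigabytes, mirroring the SQL above.
bytes=53687091200   # made-up example: 50 GB exactly
awk -v b="$bytes" 'BEGIN { printf "%.0f GB\n", b / 1024 / 1024 / 1024 }'
```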

2.       Datafiles-wise Usage
SELECT   SUM (a.BYTES) / 1024 / 1024 / 1024 allocated_bytes,
         SUM (b.free_bytes) / 1024 / 1024 / 1024
    FROM dba_data_files a,
         (SELECT   file_id, SUM (BYTES) free_bytes
              FROM dba_free_space b
          GROUP BY file_id) b
   WHERE a.file_id = b.file_id
ORDER BY a.tablespace_name;

3.       How to get Schema Size
set linesize 150
set pagesize 5000
col owner for a15
col segment_name for a30
col segment_type for a20
col TABLESPACE_NAME for a30
clear breaks
clear computes
compute sum of SIZE_IN_GB on report
break on report
select OWNER,sum(bytes)/1024/1024/1024 "SIZE_IN_GB" from dba_segments group by owner order by owner;


4.       How to get Particular Table Size
SELECT   segment_name table_name,
         SUM (BYTES) / (1024 * 1024) table_size_meg
    FROM dba_extents
   WHERE segment_type = 'TABLE' and segment_name ='<table-name>'
GROUP BY segment_name;

5.       How to get Size of all tables of particular schemas
SELECT   segment_name table_name,
         SUM (BYTES) / (1024 * 1024) table_size_meg
    FROM dba_extents
   WHERE segment_type = 'TABLE' and owner='TRIP'
GROUP BY segment_name;


How to Perform Manual Failover in Oracle

Oracle Database Failover Steps (Manual)


Purpose: When the primary site is unavailable (crashed, lost, stolen, or damaged), there is no option except performing a failover to the standby site.

Note: Do not run this on production as a test, because once you perform the steps below the old primary site can no longer be used, i.e. you cannot perform switchover operations later.

Failover Steps (Manual)
STEP 1 :
alter session set tracefile_identifier='FAILOVER_STBY_<Date-Time>';
STEP 2 :
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;

If the above command fails, try the command below (otherwise skip it):
ALTER DATABASE ACTIVATE PHYSICAL STANDBY DATABASE;
STEP 3 :
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN;
STEP 4 :

ALTER DATABASE OPEN;
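
For review before an emergency, the failover statements above can be collected into a script that only prints them; the tracefile identifier placeholder is kept as-is:

```shell
# Print (do not execute) the failover sequence for review.
cat <<'EOF'
alter session set tracefile_identifier='FAILOVER_STBY_<Date-Time>';
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN;
ALTER DATABASE OPEN;
EOF
```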

How to Perform Manual Switchover in Dataguard

How to Perform Manual Switchover in Oracle 


PRIMARY DATABASE OPERATIONS

1.       Check whether the switchover can take place. SWITCHOVER_STATUS should have the value TO STANDBY or SESSIONS ACTIVE; if not, redo transport is not functioning properly and you will not be able to perform the switchover. If you see SESSIONS ACTIVE, there may be connections that prevent the switchover; verify the active connections by querying V$SESSION.

2.       WARNING: the SELECT statement below MUST return "TO STANDBY"; if not, FIX the ISSUES first.
select name, open_mode, database_role, switchover_status from v$database;

3.       Trace File Settings - Change Date-Time to current time
alter session set tracefile_identifier='SWITCHOVER_<Date-Time>';

4.       Execute to convert primary DB to Standby Database
alter database commit to switchover to physical standby with session shutdown;

5.       Check Alert logfile for below text :
"Switchover: Primary controlfile converted to standby controlfile succesfully."
"Switchover: Complete - Database shutdown required"

6.       Start Primary Database as standby by executing below commands
shutdown immediate;
startup nomount;
alter database mount standby database;

7.       Defer the remote archive destination on the old primary( new standby)
alter system set log_archive_dest_state_2=defer;

8.       Check Database Role
select DATABASE_ROLE from v$database;


STANDBY DATABASE OPERATIONS


1.       WARNING: the SELECT statement below MUST return "TO PRIMARY"; if not, FIX the ISSUES first.
select name, open_mode, database_role, switchover_status from v$database;

2.       Trace File Settings - Change Date-Time to current time
alter session set tracefile_identifier='SWITCHOVERTEST_STBY_<Date-Time>';
alter database commit to switchover to primary with session shutdown;
alter database open;

3.       Check alert.log file of Standby Database for text:
"Switchover: Complete - Database mounted as primary"

4.       Enable Dest-2, again.
alter system set log_archive_dest_state_2=enable;


5.       Execute on both Primary and Standby Databases
select instance_name,status, database_status, instance_role,active_state from v$instance;

6.       Check Database Role
select DATABASE_ROLE from v$database;

7.       Run on NEW standby Database (Former primary)
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE  THROUGH ALL SWITCHOVER DISCONNECT  USING CURRENT LOGFILE;

8.       Applying Logs without opening the Database (only in Mount State)
alter database recover managed standby database disconnect from session; 

9.       Switch logfile on New Primary DB 
ALTER SYSTEM SWITCH LOGFILE;

10. Match the sequence numbers to confirm the last applied log file on the standby database, using one of the SELECT statements below. If the sequence numbers do not match, redo transport is not functioning properly and the switchover will not be successful. FIX REDO ISSUES FIRST, WARNING!

select max(sequence#) from v$archived_log;

select thread#, max(sequence#) "Last Primary Seq Generated" from v$archived_log val, v$database vdb 
where val.resetlogs_change# = vdb.resetlogs_change# group by thread# order by 1;

select thread#, max(sequence#) "Last Standby Seq Applied" from v$archived_log val, v$database vdb 
where val.resetlogs_change# = vdb.resetlogs_change# and val.applied='YES' group by thread# order by 1;
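
The comparison in step 10 can be scripted: take the MAX(SEQUENCE#) values returned by the two queries and check that they match. The numbers below are placeholders for real query results:

```shell
# Compare last sequence generated on the primary with last applied on
# the standby; the values are placeholders for the query results above.
primary_seq=1234
standby_seq=1234
if [ "$primary_seq" -eq "$standby_seq" ]; then
  echo "redo in sync (sequence $primary_seq)"
else
  echo "standby lagging: primary=$primary_seq standby=$standby_seq"
fi
```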