Wednesday, August 20, 2025

Steps for Comparing 2 AWR reports from different databases

There is no direct way to compare AWR reports from two completely different Oracle databases; however, you can achieve it by performing the steps below.

To transport AWR data from one system to another, first export the AWR data from the database on the source system, then import it into the database on the target system, so that both sets of AWR data sit in a single database for comparison.

High Level Steps

  1. Extract the AWR data from the SOURCE database (@?/rdbms/admin/awrextr.sql )

  2. Transfer the data dumps to TARGET machine/server

  3. Load the AWR data to TARGET DATABASE (using @?/rdbms/admin/awrload.sql)

  4. Generate an AWR Compare Periods report (using @?/rdbms/admin/awrddrpt.sql)


Step 1: Prepare Directory

Create a directory object, or use an existing one, with write permission on its OS path
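For example (a sketch; the directory name and path are placeholders, adjust to your environment):

```sql
-- Create a directory object pointing at an OS path writable by the oracle user
CREATE OR REPLACE DIRECTORY awr_dump_dir AS '/u01/app/oracle/awr_dump';

-- Verify it
SELECT directory_name, directory_path
FROM   dba_directories
WHERE  directory_name = 'AWR_DUMP_DIR';
```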

Step 2: Export AWR Data from Source Database

@$ORACLE_HOME/rdbms/admin/awrextr.sql

You will be prompted for:
(1) database id
(2) snapshot range to extract
(3) name of directory object
(4) name of dump file
(5) export sql monitor data or not
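If you want the DBID and the available snapshot range handy before answering the prompts, they can be looked up first (a sketch using the standard AWR views):

```sql
-- DBID of the current database
SELECT dbid, name FROM v$database;

-- Snapshot range available in the AWR repository
SELECT MIN(snap_id), MAX(snap_id),
       MIN(begin_interval_time), MAX(end_interval_time)
FROM   dba_hist_snapshot;
```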

Step 3: Transfer Export Dump File to Target Server

Copy the .dmp file to the target server location

Step 4: Prepare Directory Object in Target Database
Create a directory object, or use an existing one, pointing to the location where you placed the source dump

Step 5: Import AWR Data into Target Database
Run the awrload.sql script as SYS in the target database to load the data. If you are using a PDB, set the container to that PDB first, then run the script below:

@$ORACLE_HOME/rdbms/admin/awrload.sql

Step 6: Generate Compare Period Report in Target Database

@$ORACLE_HOME/rdbms/admin/awrddrpt.sql

  • All options are self-explanatory



Monday, August 18, 2025

SQL Developer Extension for VS Code


Hi, recently I came across a wonderful new feature introduced by Oracle: “Oracle SQL Developer for VS Code”. In early 2024, Oracle released an extension for VS Code that lets you execute SQL queries and scripts and do PL/SQL development. It's pretty awesome, and here is how you can install it.


Download Visual Studio Code (VS Code) from here, in case you don't have it already

Oracle Database versions supported by this extension: 11g to 23


To install, click the “Extensions” icon on the left side


Search for ‘Oracle’ and install the extension



DB connection setup


Once installed, a new SQL Developer icon appears on the left sidebar






From the Connection type, choose “TNS” and locate the TNS file if you want to use TNS entries


Go to “file” → “Preferences” → “Settings” → “Database Connections” 


and update the path to your TNS file there


If you encounter any issues using a TNS connection, choose the “Connect Identifier” option for Type and enter the TNS alias name in "Connect Identifier"



Once you have set up the DB connection, you can browse all DB objects in the tree navigation and run queries against the DB

Read More:

https://www.oracle.com/database/sqldeveloper/vscode/

https://marketplace.visualstudio.com/items?itemName=Oracle.sql-developer






Sunday, August 17, 2025

Convert to SNAPSHOT Standby Shell script

#!/bin/bash
###########################################################################
#
# Purpose : This script will put a physical standby into SNAPSHOT standby
# Usage   : ./convert_standby_broker.sh [to_snapshot|to_physical]
#
# Author  : Chandra B
###########################################################################
# Version - Initial Draft

set -e

# Configuration
ORACLE_SID="db_sid"
PRIMARY_DG_NAME="your_primary_dg_name"
STANDBY_DG_NAME="db_name"
DB_USER="sys"
DB_PASSWORD="your_sys_password"
##DG_BROKER_CONFIG="/path/to/dg_broker_config_files"  # Adjust to your DGMGRL config path
EMAIL_TO="TO_EMAIL"
EMAIL_SUBJECT="Oracle Snapshot Standby Conversion Alert"
EMAIL_FROM="TO_EMAIL"

LOG_FILE="/home/oracle/CB/Shell_convert_snapshot_standby.log"
[[ -f ${LOG_FILE} ]] && rm ${LOG_FILE}

# Set Oracle DB environment
export ORACLE_HOME=/u01/app/rdbms/product/19c
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH
export ORACLE_SID

# Function to log messages
log_message() {
    echo -e "\n$(date '+%Y-%m-%d %H:%M:%S'): $1" | tee -a "${LOG_FILE}"
}

# Function to send email
send_email() {
    local message="$1"
    log_message "Email sent: ${message}"
    cat ${LOG_FILE} | mailx -s "${EMAIL_SUBJECT}" -r "${EMAIL_FROM}" "${EMAIL_TO}"
}

# Function to check for errors and exit
check_error() {
    if [ $1 -ne 0 ]; then
        log_message "Error: $2"
        send_email "Error during snapshot standby conversion: $2"
        exit 1
    fi
}

# Prompt for confirmation before converting
check_prompt() {
    while true
    do
        # (1) prompt and read from the command line
        read -p "Do you want to proceed ...?? " answer
        # (2) handle the input we were given
        case $answer in
            [yY]* ) echo -e "\nEntered YES, continuing script........\n"
                    break;;
            [nN]* ) echo -e "\nEntered NO ........"
                    echo -e "\nExiting script ........\n"
                    exit;;
            * )     echo "Just enter Y or N, please.";;
        esac
    done
}

# Run a one-off SQL statement and return its output
run_sql() {
    local sql="$1"
    sqlplus -s "/ as sysdba" <<EOSQL
SET PAGESIZE 0 FEEDBACK OFF VERIFY OFF HEADING OFF ECHO OFF TIMING OFF
$sql
EXIT;
EOSQL
}

# Pre-conversion checks
log_message "Starting pre-conversion checks..."

# Check that the database is a physical standby
STANDBY_ROLE=$(run_sql "SELECT database_role FROM v\$database;")
STANDBY_ROLE=$(echo "${STANDBY_ROLE}" | tr -d '[:space:]')
if [ "${STANDBY_ROLE}" != "PHYSICALSTANDBY" ]; then
    log_message "Error: Database is not a physical standby (Role: ${STANDBY_ROLE})"
    send_email "Conversion failed: Database is not a physical standby (Role: ${STANDBY_ROLE})"
    exit 1
fi
log_message "Check passed: Database is a physical standby."

# Check that the Data Guard Broker configuration is healthy
BROKER_STATUS=$(dgmgrl -silent / "show configuration;" | grep -A 1 "Configuration Status" | grep -v "Configuration Status" | awk '{print $1}')
if [ "${BROKER_STATUS}" != "SUCCESS" ]; then
    log_message "Error: Data Guard Broker configuration is not healthy (Status: ${BROKER_STATUS})"
    send_email "Conversion failed: Data Guard Broker configuration is not healthy (Status: ${BROKER_STATUS})"
    exit 1
fi
log_message "Check passed: Data Guard Broker is healthy."

# Check that flashback database is enabled (required for snapshot standby)
FLASHBACK_STATUS=$(run_sql "SELECT flashback_on FROM v\$database;")
FLASHBACK_STATUS=$(echo "${FLASHBACK_STATUS}" | tr -d '[:space:]')
if [ "${FLASHBACK_STATUS}" != "YES" ]; then
    log_message "Error: Flashback database is not enabled"
    send_email "Conversion failed: Flashback database is not enabled on the standby"
    exit 1
fi
log_message "Check passed: Flashback database is enabled."

# Check that the standby is in sync with the primary
# Note: VALUE in v$dataguard_stats is an interval string like '+00 00:00:05'
LAG_RAW=$(run_sql "SELECT value FROM v\$dataguard_stats WHERE name = 'apply lag';")
LAG_SECONDS=$(echo "${LAG_RAW}" | tr -d '+' | awk '{split($2,t,":"); print ($1*86400)+(t[1]*3600)+(t[2]*60)+t[3]}')
if [ -z "${LAG_SECONDS}" ] || [ "${LAG_SECONDS}" -gt 300 ]; then
    log_message "Error: Standby is not in sync with primary (Lag: ${LAG_SECONDS} seconds)"
    send_email "Conversion failed: Standby is not in sync with primary (Lag: ${LAG_SECONDS} seconds)"
    exit 1
fi
log_message "Check passed: Standby is in sync with primary (Lag: ${LAG_SECONDS} seconds)."

# Perform conversion to snapshot standby using DGMGRL
convert_to_snapshot() {
    check_prompt
    log_message "Starting conversion to snapshot standby..."
    dgmgrl / > /tmp/dgmgrl_convert.log 2>&1 <<EOF
CONVERT DATABASE ${STANDBY_DG_NAME} TO SNAPSHOT STANDBY;
SHOW CONFIGURATION;
EXIT;
EOF
    check_error $? "Failed to convert database to snapshot standby. Check /tmp/dgmgrl_convert.log for details."
    log_message "---[ Completed conversion, below is the output ]--- "
    cat /tmp/dgmgrl_convert.log | tee -a ${LOG_FILE}

    # Verify conversion (role comes back as 'SNAPSHOT STANDBY'; spaces are stripped)
    SNAPSHOT_STATUS=$(run_sql "SELECT database_role FROM v\$database;")
    SNAPSHOT_STATUS=$(echo "${SNAPSHOT_STATUS}" | tr -d '[:space:]')
    if [ "${SNAPSHOT_STATUS}" != "SNAPSHOTSTANDBY" ]; then
        log_message "Error: Conversion failed. Database role is ${SNAPSHOT_STATUS}, expected SNAPSHOT STANDBY"
        send_email "Conversion failed: Database role is ${SNAPSHOT_STATUS}, expected SNAPSHOT STANDBY"
        exit 1
    fi
    log_message "Conversion successful: Database is now a snapshot standby."

    # Verify read-write access
    sqlplus -s / as sysdba > /tmp/test_rw.log 2>&1 <<EOF
CREATE TABLE test_snapshot (id NUMBER);
INSERT INTO test_snapshot VALUES (1);
COMMIT;
DROP TABLE test_snapshot;
EXIT;
EOF
    check_error $? "Failed to verify read-write access on snapshot standby. Check /tmp/test_rw.log for details."
    log_message "Check passed: Snapshot standby is read-write."

    # Send success email
    send_email "Successfully converted ${STANDBY_DG_NAME} to snapshot standby."
    log_message "Snapshot standby conversion completed successfully."
}

revert_to_physical() {
    ### -- Revert back NOT working yet
    log_message "revert_to_physical is not implemented yet."
    exit 1
}

# Main script logic
main() {
    case "$1" in
        to_snapshot) convert_to_snapshot ;;
        to_physical) revert_to_physical ;;
        *) log_message "Usage: $0 [to_snapshot|to_physical]"
           exit 1 ;;
    esac
}

main $1
cat "${LOG_FILE}"
exit 0
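One detail worth calling out: the apply-lag VALUE returned by v$dataguard_stats is an interval string such as '+00 00:00:05' (days, then hh:mi:ss), not a plain number, so it must be converted before any numeric comparison. A minimal sketch of that conversion (pure shell/awk, using a hard-coded sample value in place of a live query):

```shell
#!/bin/bash
# Sample apply-lag value, as would be returned by:
#   SELECT value FROM v$dataguard_stats WHERE name = 'apply lag';
LAG_RAW='+00 00:05:30'   # 0 days, 5 minutes, 30 seconds

# Strip the leading '+', split 'days hh:mi:ss', convert to total seconds
LAG_SECONDS=$(echo "${LAG_RAW}" | tr -d '+' | awk '{split($2,t,":"); print ($1*86400)+(t[1]*3600)+(t[2]*60)+t[3]}')

echo "Apply lag in seconds: ${LAG_SECONDS}"

if [ "${LAG_SECONDS}" -gt 300 ]; then
    echo "Standby lag exceeds the 300-second threshold"
else
    echo "Standby lag is within the threshold"
fi
```

With the sample value above this prints an apply lag of 330 seconds, which trips the 300-second threshold.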

Monday, November 4, 2024

pre-post check script

 #######################################################
#!/bin/ksh
# set the DB env and run
# Script to check PRE & Post checks Manually
#
# CB - <Date>   - version 1 - Initial Draft
#set -x

echo -e "\n*************************************"
echo  "DATE = `date`"
echo  "HOSTNAME = " `hostname`
echo -e "*************************************\n"

chk_type=`echo $1 | tr "[:upper:]" "[:lower:]"`

if [ "$1" = "" ]; then
  echo -e "\n Please provide pre or post "
  echo -e "Syntax: $0 <pre>|<post> \n"
  exit 1
elif [[ "${chk_type}" == "pre" || "${chk_type}" == "post" ]]; then
  echo -e "\n Entered ${chk_type}   ..........  \n"
else
  echo -e "\n Please enter either \"pre\" or \"post\" only. \n"
  exit 1
fi
export logfile=/home/oracle/CB/${chk_type}_shell_logfile.log
export home=<ORACLE_HOME>
export oem_agent_home=<Agent_home>
export psu_location=<Patch_path>/<patch#>

#exit 1
precheck(){
echo -e "\n   -----[ Gather OS Info ]------\n"
date
uname -a
cat /etc/oratab
ps -ef |grep pmon
ps -ef |grep tns
crsctl stat res -t
df -h
echo -e "\n   -----[ OEM Agent Info ]------\n"
${oem_agent_home}/bin/emctl status agent
${oem_agent_home}/bin/emctl status blackout <name>
}


home_check () {
echo -e "\n     -----[ Gather PATCH Info ]------\n"
$home/OPatch/opatch version
$home/OPatch/opatch lsinventory
 if [[ "$chk_type" == "pre" ]]; then
  echo -e "\n Checking patch Conflicts ........."
  cd ${psu_location}
  ${home}/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./
 fi
}

db_check () {
echo -e "\n   -----[ Gather DB Info ]------\n"
crsctl stat res -t | grep ".db$" |tr "." " " | awk -F" " '{print $2}'| while read LINE2
do
 # case $LINE2 in
 #   *)
echo -e "\n Running for : $LINE2  "
srvctl status database -d $LINE2 -v
srvctl status service -d $LINE2 -v
echo "-------------------------"
#esac
done
}

precheck   | tee ${logfile}     # first writer truncates, so each run starts a fresh logfile
home_check | tee -a ${logfile}
db_check   | tee -a ${logfile}

sql_info (){
# Pick the 11.2 SIDs from /etc/oratab, skipping comment and ASM entries
cat /etc/oratab | grep 112 | tr ":" " " | awk -F" " '{print $1}' | egrep -v '\#|ASM' | while read LINE3
do
export ORACLE_SID=${LINE3}
export ORACLE_HOME=${home}
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
#echo "$LINE3"
echo -e "\n    --------[ Running for database : $LINE3 ]---------"
$ORACLE_HOME/bin/sqlplus -s / as sysdba << eof
set pages 999 lines 280 feedback off
col COMMENTS for a33
col pdb_name format a15
col dp_action format a15
col dp_status format a20
col dp_action_time format a20
col dp_description format a30
select comments, action, to_char(action_time,'DD/MM/RR HH24:MI:SS') action_date, version  from sys.registry\$history  order by action_date;
 select name,action_time dp_action_time,id dp_patch_id,action dp_action,comments dp_description from v\$database,sys.registry\$history  where action_time=(select max(action_time)   from sys.registry\$history);
eof
done
}

sql_info

cat  ${logfile} | mailx -s "`hostname` : precheck info " Your_email
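The SID-selection pipeline used in sql_info can be sanity-checked without touching a real /etc/oratab by feeding it sample lines (a sketch; the oratab entries below are made up):

```shell
#!/bin/bash
# Simulated /etc/oratab content (hypothetical entries)
oratab_sample='#comment line
+ASM1:/u01/app/grid/11204:N
orcl1:/u01/app/oracle/product/11204:Y
test1:/u01/app/oracle/product/19c:N'

# Same filter as sql_info: keep 11.2 homes, drop comments and ASM entries
echo "$oratab_sample" | grep 112 | tr ":" " " | awk -F" " '{print $1}' | egrep -v '\#|ASM'
```

The comment line and the 19c entry fail the `grep 112` match, the ASM instance is filtered out, and only `orcl1` survives.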

 

Monday, July 22, 2024

Postgres DB Fundamentals

 

vacuum --> roughly the equivalent of a defrag in Oracle

 

In Postgres, tables get bloated due to DMLs

A tuple is the internal representation of a row


 

Vacuum                                      Vacuum FULL
-----------------------------------------   -----------------------------------------
Frees up dead rows for reuse                Rewrites the table with no dead rows (tuples)
SELECTs and DMLs allowed                    No DMLs, including SELECT, allowed
No exclusive locks                          Takes an EXCLUSIVE lock
OS space NOT released                       OS space released

Tune autovacuum:

Set the vacuum cost/IO-limit parameters (vacuum_cost_limit, vacuum_cost_page_hit/miss/dirty)
Number of workers: autovacuum_max_workers (vacuum workers are I/O intensive)
 

autovacuum_vacuum_threshold:

Minimum number of updated/deleted tuples needed to trigger a VACUUM on a table. The default is 50 tuples.

 

autovacuum_vacuum_scale_factor:

Fraction of the table size (in number of rows) added to the threshold when deciding whether a VACUUM should be triggered. The default is 0.2 (20%).
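Putting the two parameters together, PostgreSQL fires autovacuum on a table when dead tuples exceed autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples. A quick sketch of that arithmetic with the defaults and a hypothetical 100,000-row table:

```shell
#!/bin/bash
# Autovacuum trigger point:
#   dead tuples > autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples
THRESHOLD=50        # autovacuum_vacuum_threshold (default)
SCALE_FACTOR=0.2    # autovacuum_vacuum_scale_factor (default)
RELTUPLES=100000    # example table row count (made up)

TRIGGER_AT=$(awk -v t="$THRESHOLD" -v s="$SCALE_FACTOR" -v n="$RELTUPLES" 'BEGIN {print t + s*n}')
echo "Autovacuum fires after ${TRIGGER_AT} dead tuples"
```

So for this table, autovacuum kicks in after 20,050 dead tuples accumulate.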

 

Transaction wraparound

·        TXIDs (32-bit IDs) can go up to ~4 billion

·        Postgres will stop accepting commands when there are fewer than one million transactions left before the maximum XID value is reached.

 

   Why does this happen?

·        Autovacuum is turned off

·        Heavy DML operations

·        Many sessions or connections holding locks for a very long time

 

   What happens in this situation?

·        PostgreSQL stops accepting DML statements and switches to READ ONLY mode.

 

   FIX:

·        Stop the database and bring it up in single-user mode

·        Run VACUUM on the entire DB (vacuumdb --all); a plain VACUUM is enough to freeze old XIDs, FULL is not required

·        Once done, stop and restart Postgres normally
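To watch how close each database is getting to wraparound before it ever reaches this state, the XID age can be monitored with a standard catalog query (a sketch):

```sql
-- Age (in transactions) of the oldest unfrozen XID per database;
-- autovacuum_freeze_max_age (default 200 million) is the forced-freeze trigger
SELECT datname,
       age(datfrozenxid) AS xid_age
FROM   pg_database
ORDER  BY xid_age DESC;
```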


Monday, April 22, 2024

OEM Modifying-em-metric-threshold sizeOfOSAuditFiles

 

To modify the “sizeOfOSAuditFiles” metric for all targets in OEM in one shot (useful when you have many of them), use the script below

Check the current values/settings for your targets:

select  * from sysman.MGMT$METRIC_COLLECTION A
where A.METRIC_NAME='sizeOfOSAuditFiles'
and WARNING_THRESHOLD <'2000';
--and target_name='<target_name>
 
Generate the emcli commands using the query below:
 
select
'emcli modify_threshold -target_name="'||A.target_name||'" -target_type="oracle_database" -metric="sizeOfOSAuditFiles" -column="FILE_SIZE" -warning_threshold="2000" -critical_threshold="5000" -force'
from sysman.MGMT$METRIC_COLLECTION A
where A.METRIC_NAME='sizeOfOSAuditFiles'
and WARNING_THRESHOLD <='2000';
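The generated commands can be spooled straight into a script file from SQL*Plus (a sketch; the spool file name is arbitrary):

```sql
set heading off feedback off pagesize 0 linesize 400 trimspool on
spool modify_audit.sh

select 'emcli modify_threshold -target_name="'||A.target_name||'" -target_type="oracle_database" -metric="sizeOfOSAuditFiles" -column="FILE_SIZE" -warning_threshold="2000" -critical_threshold="5000" -force'
from sysman.MGMT$METRIC_COLLECTION A
where A.METRIC_NAME='sizeOfOSAuditFiles'
and WARNING_THRESHOLD <='2000';

spool off
```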
 
Place the above emcli commands in a shell script and run it from the OMS repo server

$ emcli login -username=sysman 

vi modify_audit.sh
 
[oracle@omshost CB]$ ll
total 20
-rw-r-----. 1 oracle dba 18471 Apr 22 08:42 modify_audit.sh
 
[oracle@ omshost CB]$ chmod 755 modify_audit.sh
total 20
-rwxr-xr-x. 1 oracle dba 18471 Apr 22 08:42 modify_audit.sh
 
[oracle@ omshost CB]$ sh modify_audit.sh
 

 



 and if you want to disable this metric entirely, follow the corresponding Oracle doc, thanks 

 

Saturday, March 23, 2024

Postgres Backup/Restore

 

pg_dump    --> creates a backup of ONE database at a time

pg_dumpall --> can back up ALL of your databases at once

 

pg_dump -U username -W -F t database_name > c:\backup_file.tar

 
-F : specifies the output file format, one of:
    ·        c: custom-format archive
    ·        d: directory-format archive
    ·        t: tar
    ·        p: plain-text SQL script file (default)
-h : specifies the database server host
-p : specifies the database server port
-U : specifies the user used to connect to the PostgreSQL database server
-W : prompts for a password before connecting to the PostgreSQL server
-d : specifies the database to dump
 
 
 

SCHEMA backup
 

pg_dump --username=user --password --schema-only dbname > database_schema.sql    (add -n <schema_name> to limit the dump to a single schema)

pg_dump --schema-only DATABASE > schema.sql

 
 
EX:
pg_dump --clean --create --file /tmp/DB-$(date +%Y%m%d).pgdump --format=custom --no-unlogged-table-data  <dbname>
 
 
psql --dbname=DBNAME --command="select pg_start_backup('CurBuild');"    -- note: renamed to pg_backup_start() in PostgreSQL 15
 

Restore

To import a single database testdb from the tar dumpfile

pg_restore -c -U username -W -F t -d testdb dump.tar

 

Note: a pg_dump archive contains only ONE database. A pg_dumpall backup is plain SQL and is restored with psql rather than pg_restore:

psql -U username -f dumpall.sql postgres


To import 1 database from .sql backup

postgres=# create database <new_dbname>;

psql   -d <New_DBNAME>    < dump.sql 

Validate :

-bash-4.2$ psql

psql (15.4)

Type "help" for help.

postgres=#  \l

postgres=# \c   <new_dbname>

postgres=#  SELECT pg_size_pretty( pg_database_size('NEW_DBNAME'));

 



