Wednesday, 29 January 2025

Steps to enable user login auditing

1. Set the audit trail to "DB" (database level). You can do this in SQL*Plus or through any SQL tool you use to connect to the database.

ALTER SYSTEM SET audit_trail = 'DB' SCOPE = SPFILE;

This ensures that the audit trail is written to the database and will be available for querying later. 
You'll need to restart the database after running this command for the change to take effect.
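
After the restart, you can confirm the new setting with a quick check against the V$PARAMETER view:

SELECT name, value FROM v$parameter WHERE name = 'audit_trail';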


2. Audit only user login activities: Use the following command to enable auditing specifically for login events:

AUDIT SESSION;


This will audit all login attempts (successful and failed) for all users. If you want to restrict auditing to a specific user:

AUDIT SESSION BY scott;

To audit login attempts by a particular set of users:


AUDIT SESSION BY user1, user2;


3. If you modified the audit_trail parameter, restart the database for the changes to take effect:

SHUTDOWN IMMEDIATE;
STARTUP;


Verifying the Audit Setup:

After setting the audit configuration, you can verify that login auditing is working by querying the DBA_AUDIT_TRAIL view:


SELECT * FROM DBA_AUDIT_TRAIL WHERE ACTION_NAME = 'LOGON';


This will show you the login-related audit entries.
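
If you are only interested in failed logins, the same audit data can be filtered through the DBA_AUDIT_SESSION view:

-- RETURNCODE <> 0 means the login failed (e.g. 1017 = invalid username/password)
SELECT os_username, username, userhost, timestamp, returncode
FROM dba_audit_session
WHERE returncode <> 0
ORDER BY timestamp DESC;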


Tuesday, 28 January 2025

rman rac duplicate with catalog and archive



1) Set the parameters below in the pfile and start the database in NOMOUNT stage.

*.db_name='HRPgnet2'
*.db_unique_name='HRPgnet2'
*.control_files='+HRPGNET2_DATA','+HRPGNET2_ARCH'
*.db_file_name_convert='+HRPGNEP2_DATA','+HRPGNET2_DATA'
*.log_file_name_convert='+HRPGNEP2_DATA','+HRPGNET2_DATA' 

SQL> startup nomount pfile='/HRPgnet2/oracle/initHRPgnet2.ora'
ORACLE instance started.

Total System Global Area 1068937216 bytes
Fixed Size                  2235208 bytes
Variable Size             507511992 bytes
Database Buffers          545259520 bytes
Redo Buffers               13930496 bytes
SQL> select open_mode from v$database;
select open_mode from v$database
                      *
ERROR at line 1:
ORA-01507: database not mounted

SQL>


2) Restore production controlfile

RMAN> connect target /;

connected to target database: HRPGNEP2 (not mounted)

RMAN> restore controlfile from '/HRPgnet2/oracle/11.2.0/dbs/snapcf_HRPgnep21.f';

Starting restore at 26-NOV-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=1777 device type=DISK

channel ORA_DISK_1: copied control file copy
output file name=+HRPGNET2_DATA/HRPgnep2/controlfile/current.297.832504631
output file name=+HRPGNET2_ARCH/HRPgnep2/controlfile/current.282.832504631
Finished restore at 26-NOV-13

RMAN> 


3) Mount the database and catalog the backup pieces

rman
connect target / 
alter database mount;
CATALOG START WITH '/backup/rman_1060/HRPgnep2/db_bkp_gnep2' NOPROMPT; 
run 
{
ALLOCATE CHANNEL ch00 TYPE DISK;
ALLOCATE CHANNEL ch01 TYPE DISK;
ALLOCATE CHANNEL ch02 TYPE DISK;
ALLOCATE CHANNEL ch03 TYPE DISK;
ALLOCATE CHANNEL ch04 TYPE DISK;
ALLOCATE CHANNEL ch05 TYPE DISK;
ALLOCATE CHANNEL ch06 TYPE DISK;
ALLOCATE CHANNEL ch07 TYPE DISK;
ALLOCATE CHANNEL ch08 TYPE DISK;
ALLOCATE CHANNEL ch09 TYPE DISK;
ALLOCATE CHANNEL ch10 TYPE DISK;
ALLOCATE CHANNEL ch11 TYPE DISK;
set newname for datafile 1 to '+HRPGNET2_DATA';
set newname for datafile 2 to '+HRPGNET2_DATA';
set newname for datafile 3 to '+HRPGNET2_DATA';
set newname for datafile 4 to '+HRPGNET2_DATA';
set newname for datafile 5 to '+HRPGNET2_DATA';
set newname for datafile 6 to '+HRPGNET2_DATA';
set newname for datafile 7 to '+HRPGNET2_DATA';
set newname for datafile 8 to '+HRPGNET2_DATA';
set newname for datafile 9 to '+HRPGNET2_DATA';
set newname for datafile 10 to '+HRPGNET2_DATA';
set newname for datafile 11 to '+HRPGNET2_DATA';
set newname for datafile 12 to '+HRPGNET2_DATA';
set newname for datafile 13 to '+HRPGNET2_DATA';
set newname for datafile 14 to '+HRPGNET2_DATA';
set newname for datafile 15 to '+HRPGNET2_DATA';
set newname for datafile 16 to '+HRPGNET2_DATA';
set newname for datafile 17 to '+HRPGNET2_DATA';
set newname for datafile 18 to '+HRPGNET2_DATA';
set newname for datafile 19 to '+HRPGNET2_DATA';
set newname for datafile 288 to '+HRPGNET2_DATA';
set newname for datafile 295 to '+HRPGNET2_DATA';
set newname for datafile 314 to '+HRPGNET2_DATA';
set newname for datafile 351 to '+HRPGNET2_DATA';
set newname for datafile 352 to '+HRPGNET2_DATA';
set newname for datafile 353 to '+HRPGNET2_DATA';
set newname for datafile 354 to '+HRPGNET2_DATA';
set newname for datafile 379 to '+HRPGNET2_DATA';
set newname for datafile 392 to '+HRPGNET2_DATA';
set newname for datafile 393 to '+HRPGNET2_DATA';
set newname for datafile 394 to '+HRPGNET2_DATA';
set newname for datafile 395 to '+HRPGNET2_DATA';
set newname for datafile 396 to '+HRPGNET2_DATA';
set newname for datafile 397 to '+HRPGNET2_DATA';
set newname for datafile 398 to '+HRPGNET2_DATA';
set newname for datafile 399 to '+HRPGNET2_DATA';
set newname for datafile 400 to '+HRPGNET2_DATA';
set newname for datafile 401 to '+HRPGNET2_DATA';
set newname for datafile 402 to '+HRPGNET2_DATA';
set newname for datafile 403 to '+HRPGNET2_DATA';
set newname for datafile 404 to '+HRPGNET2_DATA';
set newname for datafile 405 to '+HRPGNET2_DATA';
set newname for datafile 406 to '+HRPGNET2_DATA';
set newname for datafile 407 to '+HRPGNET2_DATA';
restore database;
switch datafile all;
RELEASE CHANNEL ch00;
RELEASE CHANNEL ch01;
RELEASE CHANNEL ch02;
RELEASE CHANNEL ch03;
RELEASE CHANNEL ch04;
RELEASE CHANNEL ch05;
RELEASE CHANNEL ch06;
RELEASE CHANNEL ch07;
RELEASE CHANNEL ch08;
RELEASE CHANNEL ch09;
RELEASE CHANNEL ch10;
RELEASE CHANNEL ch11;
}

Once the restore is complete, recover the database using the RECOVER DATABASE command and open the database with RESETLOGS.
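
For reference, those final commands look roughly like this (run from the same RMAN session):

RMAN> recover database;
RMAN> alter database open resetlogs;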


RMAN Backup script

 rman_backup.sh


. /u02/PROD_gtebsdb1p.env

CDATE=`date +%d-%m-%Y-%H-%M`

export CDATE;

mkdir -p /backups/rman/rman_backup/$CDATE

export LOGFILE=/backups/rman/rman_backup/$CDATE/rman_backup_log_${CDATE}.log

touch $LOGFILE

rman target / << EOF | tee -i $LOGFILE


run {

sql "alter system archive log current";

allocate channel t1 type disk;

allocate channel t2 type disk;

allocate channel t3 type disk;

allocate channel t4 type disk;

backup format='/backups/rman/rman_backup/$CDATE/Tp_cntl_%Y%M%D_%d_%s_%p_1.ctl' current controlfile;

backup as compressed backupset format='/backups/rman/rman_backup/$CDATE/%U' database;

backup format='/backups/rman/rman_backup/$CDATE/%d_%s_%p_%c_%t.arc' archivelog FROM TIME 'SYSDATE-2' UNTIL TIME 'SYSDATE';

backup format='/backups/rman/rman_backup/$CDATE/Tp_cntl_%Y%M%D_%d_%s_%p_2.ctl' current controlfile;

release channel t1;

release channel t2;

release channel t3;

release channel t4;

}

exit
EOF
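
After the script finishes, a quick check from SQL*Plus against the V$RMAN_BACKUP_JOB_DETAILS view shows whether the backup completed:

SELECT start_time, end_time, status, input_type,
       ROUND(output_bytes / 1024 / 1024 / 1024, 2) AS output_gb
FROM v$rman_backup_job_details
ORDER BY end_time DESC;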

Recovering the database to a point in time (PITR).

rman target / log=/u08/PBIDW/RMAN/SBIDW_BACKUP/rman_recover.log << EOF
run
{
ALLOCATE CHANNEL c1 DEVICE TYPE DISK;
ALLOCATE CHANNEL c2 DEVICE TYPE DISK;
ALLOCATE CHANNEL c3 DEVICE TYPE DISK;
SET ARCHIVELOG DESTINATION to '/u08/PBIDW/RMAN/SBIDW_BACKUP';
RECOVER DATABASE UNTIL TIME "TO_DATE('2015-06-01:00:00:00','YYYY-MM-DD:HH24:MI:SS')";
release channel c1;
release channel c2;
release channel c3;
}
EOF
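
After an incomplete (point-in-time) recovery the database must be opened with RESETLOGS; you can then confirm the new incarnation from SQL*Plus:

ALTER DATABASE OPEN RESETLOGS;

SELECT incarnation#, resetlogs_time, status
FROM v$database_incarnation
ORDER BY incarnation#;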

RMAN DUPLICATE / CLONE RAC DATABASE


RMAN RAC Database backup pieces are available in below location.

RMAN Backup ==> /backup/rman_1060/proddb/db_bkp_prod


Cloning the target database using RMAN backup.


rman
connect auxiliary /
run 
{
ALLOCATE auxiliary CHANNEL ch01 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch02 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch03 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch04 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch05 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch06 TYPE DISK;
DUPLICATE DATABASE TO DEVDB BACKUP LOCATION '/backup/rman_1060/proddb/db_bkp_prod' NOFILENAMECHECK;
RELEASE CHANNEL ch01;
RELEASE CHANNEL ch02;
RELEASE CHANNEL ch03;
RELEASE CHANNEL ch04;
RELEASE CHANNEL ch05;
RELEASE CHANNEL ch06;
}
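
Once the duplicate finishes, a quick check on the new DEVDB instance confirms it is open under the new name:

SELECT name, db_unique_name, open_mode FROM v$database;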

Restoring archive logs from tape to a backup location using sequence numbers

rman target /

RMAN > connect catalog rmanrail/rman007@atlrxxp1

connected to recovery catalog database

RMAN >run
{

set archivelog destination to '/proddb/dbdata/data1/arch20jun';
restore archivelog from logseq=81066 until logseq=81072;

}
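
The restored sequences can then be confirmed from SQL*Plus through the V$ARCHIVED_LOG view:

SELECT thread#, sequence#, name, completion_time
FROM v$archived_log
WHERE sequence# BETWEEN 81066 AND 81072
ORDER BY thread#, sequence#;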

Create standby from backup base location

Run the command below on the standby (auxiliary) server.



rman auxiliary / |tee restore.new.log
run {
ALLOCATE auxiliary CHANNEL ch01 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch02 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch03 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch04 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch05 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch06 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch07 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch08 TYPE DISK;
duplicate target database for standby backup location '/t02/RMAN_BACKUP' NOFILENAMECHECK;
RELEASE CHANNEL ch01;
RELEASE CHANNEL ch02;
RELEASE CHANNEL ch03;
RELEASE CHANNEL ch04;
RELEASE CHANNEL ch05;
RELEASE CHANNEL ch06;
RELEASE CHANNEL ch07;
RELEASE CHANNEL ch08;
}
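
After the standby duplicate completes, verify the role and (typically) start managed recovery from SQL*Plus on the standby:

SELECT name, database_role, open_mode FROM v$database;

-- Start redo apply (Data Guard managed recovery)
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;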

rman duplicate command

To clone the target database from the source.

Run the command below from the auxiliary (clone) server; the target and catalog connections are made over the network.


rman log=/u01/oracle/recovery.log

connect target sys/Nex#us1as3p@PRODDB
 
connect catalog rman/rman007@atlrmxx1
 
connect auxiliary /

run {
ALLOCATE auxiliary CHANNEL ch00 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch01 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch02 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch03 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch04 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch05 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch06 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch07 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch08 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch09 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch10 TYPE DISK;
set until time '12-NOV-2012 15:17:00';
duplicate target database to TESTDB;    
RELEASE CHANNEL ch00;
RELEASE CHANNEL ch01;
RELEASE CHANNEL ch02;
RELEASE CHANNEL ch03;
RELEASE CHANNEL ch04;
RELEASE CHANNEL ch05;
RELEASE CHANNEL ch06;
RELEASE CHANNEL ch07;
RELEASE CHANNEL ch08;
RELEASE CHANNEL ch09;
RELEASE CHANNEL ch10;
}

rman rac compressed database backup along with controlfile and archives.

 rman log=/backup/PRODDB/30Dec10_zp1/RMAN_FullBkp_prod.log
connect target /
run {
ALLOCATE CHANNEL ch02 TYPE DISK;
ALLOCATE CHANNEL ch03 TYPE DISK;
ALLOCATE CHANNEL ch04 TYPE DISK;
ALLOCATE CHANNEL ch05 TYPE DISK;
ALLOCATE CHANNEL ch06 TYPE DISK;
ALLOCATE CHANNEL ch07 TYPE DISK;
ALLOCATE CHANNEL ch08 TYPE DISK;
ALLOCATE CHANNEL ch09 TYPE DISK;
ALLOCATE CHANNEL ch10 TYPE DISK;
ALLOCATE CHANNEL ch11 TYPE DISK;
BACKUP AS COMPRESSED BACKUPSET FORMAT '/backup/PRODDB/30Dec10_zp1/%d_db_u%u_s%s_p%p_t%t_db' FILESPERSET 10 database plus archivelog ;
RELEASE CHANNEL ch02;
RELEASE CHANNEL ch03;
RELEASE CHANNEL ch04;
RELEASE CHANNEL ch05;
RELEASE CHANNEL ch06;
RELEASE CHANNEL ch07;
RELEASE CHANNEL ch08;
RELEASE CHANNEL ch09;
RELEASE CHANNEL ch10;
RELEASE CHANNEL ch11;
}
sql 'alter system archive log current';
BACKUP FORMAT '/backup/PRODDB/30Dec10_zp1/Control_bk_u%u_s%s_p%p_t%t_bk' TAG 'PRODDB_ctrl' CURRENT CONTROLFILE;
exit;

Take RMAN backup to a desired location

rman log=/backup/PRODDB/db_backup_PRODDB/hot_bkp_prod.log
connect target /
run {
ALLOCATE CHANNEL ch02 TYPE DISK;
ALLOCATE CHANNEL ch03 TYPE DISK;
ALLOCATE CHANNEL ch04 TYPE DISK;
ALLOCATE CHANNEL ch05 TYPE DISK;
ALLOCATE CHANNEL ch06 TYPE DISK;
ALLOCATE CHANNEL ch07 TYPE DISK;
ALLOCATE CHANNEL ch08 TYPE DISK;
ALLOCATE CHANNEL ch09 TYPE DISK;
ALLOCATE CHANNEL ch10 TYPE DISK;
ALLOCATE CHANNEL ch11 TYPE DISK;
BACKUP FORMAT '/backup/PRODDB/db_backup_PRODDB/%d_db_u%u_s%s_p%p_t%t_db' FILESPERSET 10 database plus archivelog;
RELEASE CHANNEL ch02;
RELEASE CHANNEL ch03;
RELEASE CHANNEL ch04;
RELEASE CHANNEL ch05;
RELEASE CHANNEL ch06;
RELEASE CHANNEL ch07;
RELEASE CHANNEL ch08;
RELEASE CHANNEL ch09;
RELEASE CHANNEL ch10;
RELEASE CHANNEL ch11;
}
sql 'alter system archive log current';
BACKUP FORMAT '/backup/PRODDB/db_backup_PRODDB/Control_bk_u%u_s%s_p%p_t%t_bk' TAG 'PRODDB_ctrl' CURRENT CONTROLFILE;
exit;

Monday, 27 January 2025

Script to take backup of schemas and objects

#!/bin/bash

# Ask for Oracle environment variables
read -p "Enter your ORACLE_HOME path: " ORACLE_HOME
read -p "Enter your ORACLE_SID: " ORACLE_SID

# Set Oracle environment variables
export ORACLE_HOME
export ORACLE_SID
export PATH=$ORACLE_HOME/bin:$PATH

# Get the current date format for naming backups
DATEFORMAT=$(date +"%Y%m%d_%H%M%S")

# Show menu options
echo "Select the type of backup:"
echo "1) Single schema backup"
echo "2) Multiple schema backup (comma separated)"
echo "3) Single table backup"
echo "4) Multiple table backup (comma separated)"

# Read user input
read -p "Enter your choice (1/2/3/4): " choice

case $choice in
  1)
    # Single schema backup
    read -p "Enter the schema name: " schema_name
    echo "Starting backup for schema $schema_name at $(date)" >> nohup.out

    # Run expdp command for single schema
    expdp \"/ as sysdba\" directory=OBACKUP dumpfile=expdp_${schema_name}_$DATEFORMAT_%U.dmp logfile=expdp_${schema_name}_$DATEFORMAT.log schemas=$schema_name parallel=4 cluster=n >> nohup.out 2>&1
    ;;
  
  2)
    # Multiple schema backup
    read -p "Enter the schema names (comma separated): " schema_names
    IFS=',' read -r -a SCHEMA_ARRAY <<< "$schema_names"

    for schema in "${SCHEMA_ARRAY[@]}"; do
      echo "Starting backup for schema $schema at $(date)" >> nohup.out

      # Run expdp command for each schema
      expdp \"/ as sysdba\" directory=OBACKUP dumpfile=expdp_${schema}_$DATEFORMAT_%U.dmp logfile=expdp_${schema}_$DATEFORMAT.log schemas=$schema parallel=4 cluster=n >> nohup.out 2>&1
    done
    ;;
  
  3)
    # Single table backup
    read -p "Enter the table name (schema.tablename): " table_name
    echo "Starting backup for table $table_name at $(date)" >> nohup.out

    # Run expdp command for single table
    expdp \"/ as sysdba\" directory=OBACKUP dumpfile=expdp_${table_name}_$DATEFORMAT_%U.dmp logfile=expdp_${table_name}_$DATEFORMAT.log tables=$table_name parallel=4 cluster=n >> nohup.out 2>&1
    ;;
  
  4)
    # Multiple table backup
    read -p "Enter the table names (schema.tablename1,schema.tablename2,...): " table_names
    IFS=',' read -r -a TABLE_ARRAY <<< "$table_names"

    for table in "${TABLE_ARRAY[@]}"; do
      echo "Starting backup for table $table at $(date)" >> nohup.out

      # Run expdp command for each table
      expdp \"/ as sysdba\" directory=OBACKUP dumpfile=expdp_${table}_$DATEFORMAT_%U.dmp logfile=expdp_${table}_$DATEFORMAT.log tables=$table parallel=4 cluster=n >> nohup.out 2>&1
    done
    ;;
  
  *)
    echo "Invalid choice. Please run the script again and select a valid option."
    exit 1
    ;;
esac

echo "Backup process completed at $(date)." >> nohup.out

Friday, 24 January 2025

Script to take backup of multiple schemas.

#!/bin/bash

# Prompt for necessary inputs
read -p "Enter schemas to backup (comma-separated): " schema_name
read -p "Enter ORACLE_SID: " ORACLE_SID
read -p "Enter ORACLE_HOME path: " ORACLE_HOME

# Input for transferring files to the target server
read -p "Enter target server IP: " DEST_IP
read -p "Enter target directory path: " DEST_PATH

# Date format for filenames
DATEFORMAT=$(date +%Y%m%d%H%M%S)

# Loop through schemas and perform export
IFS=',' read -r -a SCHEMA_ARRAY <<< "$schema_name"
for schema in "${SCHEMA_ARRAY[@]}"; do
  echo "Starting backup for schema $schema at $(date)" >> nohup.out

  # Run expdp command with %U option for multiple dump files
  expdp \"/ as sysdba\" directory=OBACKUP dumpfile=expdp_${schema}_$DATEFORMAT_%U.dmp logfile=expdp_${schema}_$DATEFORMAT.log schemas=$schema parallel=4 cluster=n >> nohup.out 2>&1

  # Check for successful export
  if [ $? -eq 0 ]; then
    echo "Export for $schema completed." >> nohup.out
    # Transfer dump and log files
    # OBACKUP below must be the filesystem path behind the OBACKUP directory object
    scp OBACKUP/expdp_${schema}_${DATEFORMAT}_* oracle@$DEST_IP:$DEST_PATH >> nohup.out 2>&1
    if [ $? -eq 0 ]; then
      echo "Files for $schema transferred successfully." >> nohup.out
    else
      echo "File transfer failed for $schema." >> nohup.out
    fi
  else
    echo "Export failed for $schema." >> nohup.out
  fi
done

echo "Backup script completed at $(date)" >> nohup.out

Tuesday, 21 January 2025

Find old database backup objects and delete them to reclaim space.

 -- Query to find backup objects (tables with '_bak' or '_backup' in their names)

SELECT owner, table_name 

FROM dba_tables

WHERE table_name LIKE '%\_bak%' ESCAPE '\' OR table_name LIKE '%\_backup%' ESCAPE '\';



-- Query to find backup views

SELECT owner, view_name 

FROM dba_views

WHERE view_name LIKE '%\_bak%' ESCAPE '\' OR view_name LIKE '%\_backup%' ESCAPE '\';
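
Once you have reviewed the lists, the DROP statements can be generated the same way (a sketch only; review the output carefully before running anything):

-- Generate DROP statements for the backup tables found above
SELECT 'DROP TABLE ' || owner || '.' || table_name || ' PURGE;' AS drop_stmt
FROM dba_tables
WHERE table_name LIKE '%\_bak%' ESCAPE '\'
   OR table_name LIKE '%\_backup%' ESCAPE '\';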

Database Health Check Script

-- DBA Health Check Script for Oracle Database

-- Step 1: Check Database Instance and Version
-- Check Oracle Database version and instance status
SELECT * FROM v$version;
SELECT instance_name, status FROM v$instance;

-- Step 2: Check Database Status
-- Ensure that the database is open and running
SELECT name, open_mode FROM v$database;

-- Step 3: Check Tablespace Usage
-- Check for space usage and growth of tablespaces
-- Temporary tablespace usage
SELECT tablespace_name, 
       ROUND(SUM(bytes_used + bytes_free) / 1024 / 1024, 2) AS "Total_MB",
       ROUND(SUM(bytes_free) / 1024 / 1024, 2) AS "Free_MB",
       ROUND(SUM(bytes_used) / 1024 / 1024, 2) AS "Used_MB",
       ROUND(SUM(bytes_used) * 100 / NULLIF(SUM(bytes_used + bytes_free), 0), 2) AS "Usage(%)"
FROM v$temp_space_header
GROUP BY tablespace_name;

-- Permanent tablespace usage (space allocated vs. autoextend maximum)
SELECT tablespace_name, 
       ROUND(SUM(bytes) / 1024 / 1024, 2) AS "Allocated_MB",
       ROUND(SUM(GREATEST(maxbytes, bytes)) / 1024 / 1024, 2) AS "Max_MB",
       ROUND((SUM(bytes) / SUM(GREATEST(maxbytes, bytes))) * 100, 2) AS "Allocated(%)"
FROM dba_data_files
GROUP BY tablespace_name;

-- Step 4: Check for Expired or Invalid Objects
-- It's important to check for invalid objects and make sure everything is valid
SELECT owner, object_type, object_name
FROM dba_objects
WHERE status = 'INVALID';

-- Step 5: Check for Locked Sessions
-- This will show sessions that are blocked by another session, which could affect performance
SELECT sid, serial#, username, osuser, status, machine, program
FROM v$session
WHERE blocking_session IS NOT NULL;

-- Step 6: Check for Long Running Queries
-- Monitor long running queries which could be resource hogs
SELECT s.sid, s.serial#, s.username, q.sql_text, s.status, s.osuser, s.program
FROM v$session s, v$sql q
WHERE s.sql_id = q.sql_id
AND s.status = 'ACTIVE'
AND s.username IS NOT NULL
ORDER BY s.sid;

-- Step 7: Check Redo Log Status
-- It's important to monitor redo logs for any potential issues
SELECT member, status FROM v$logfile;

-- Step 8: Check Oracle Alert Log for Errors
-- Check the alert log for recent errors and important messages
-- Check the alert log using the following:
-- $ tail -n 100 $ORACLE_BASE/diag/rdbms/<db_name>/<instance_name>/alert_<instance_name>.log

-- Step 9: Check Backup Status
-- Ensure that backups are scheduled and have been successfully completed
SELECT start_time, end_time, status, input_type
FROM v$rman_backup_job_details
ORDER BY end_time DESC
FETCH FIRST 5 ROWS ONLY;

-- Step 10: Check the Listener Status
-- Ensure that the listener is up and running (checked at the OS level, not through SQL)
-- $ lsnrctl status

-- Step 11: Check for Database Jobs and their Status
-- You should check for any long-running or failed jobs in Oracle DBMS_SCHEDULER
SELECT job_name, state, last_start_date, next_run_date
FROM dba_scheduler_jobs;

-- Step 12: Check for Tables with Excessive Rows (Potentially Large Tables)
-- Identify tables with very large numbers of rows that may need partitioning or optimization
SELECT table_name, num_rows
FROM all_tables
WHERE owner NOT IN ('SYS', 'SYSTEM') 
AND num_rows > 1000000  -- Adjust the threshold based on your environment
ORDER BY num_rows DESC;

-- Step 13: Check for Fragmentation in Tablespaces
-- Check for tablespaces with high fragmentation or those requiring cleanup
SELECT tablespace_name, 
       ROUND(SUM(bytes) / 1024 / 1024, 2) AS "Free_MB",
       ROUND(MAX(bytes) / 1024 / 1024, 2) AS "Largest_Free_Extent_MB",
       COUNT(*) AS "Free_Extents"
FROM dba_free_space
GROUP BY tablespace_name;

-- Step 14: Check System Resource Usage
-- Check CPU, memory, and IO usage on the server to ensure no system resource issues
-- This would need to be done on the OS level, not directly through SQL.
-- For Linux:
-- $ top -n 1
-- $ vmstat 1 5
-- $ iostat

-- Step 15: Check for Datafile Growth
-- Check datafiles for growth and ensure there's enough space
SELECT file_name, ROUND(bytes / 1024 / 1024 / 1024, 2) AS "Size_GB"
FROM dba_data_files;

-- Step 16: Check for Pending Table/Index/Partition Stats
-- Ensure that statistics are up-to-date to improve query performance
SELECT table_name, num_rows, last_analyzed FROM dba_tables WHERE owner = 'SCHEMA_NAME' AND (last_analyzed IS NULL OR num_rows = 0);

-- Step 17: Check Database Users and Profiles
-- Monitor active database users and their profiles
SELECT username, profile FROM dba_users;

-- Step 18: Check for High Watermark for Undo Segments
-- Check undo segments and high-water mark in undo tablespace
SELECT usn, extents, rssize, hwmsize, optsize, status
FROM v$rollstat;

-- Step 19: **Monitor Index Health**
-- Check for missing indexes on frequently queried tables (tables with no indexes at all)
SELECT t.owner, t.table_name
FROM dba_tables t
LEFT JOIN dba_indexes i
  ON i.table_name = t.table_name AND i.table_owner = t.owner
WHERE i.index_name IS NULL
AND t.owner NOT IN ('SYS', 'SYSTEM')
ORDER BY t.owner, t.table_name;

-- Step 20: **Monitor Fragmented Indexes**
-- Identify fragmented indexes which may need rebuilding or optimization
SELECT i.index_name, i.table_name,
       ROUND((i.leaf_blocks * 100) / NULLIF(s.blocks, 0)) AS "Leaf_vs_Allocated(%)"
FROM dba_indexes i
JOIN dba_segments s
  ON s.owner = i.owner AND s.segment_name = i.index_name AND s.segment_type = 'INDEX'
WHERE i.table_owner NOT IN ('SYS', 'SYSTEM')
AND ROUND((i.leaf_blocks * 100) / NULLIF(s.blocks, 0)) < 70  -- Adjust threshold; a low ratio of leaf blocks to allocated blocks can indicate wasted space
ORDER BY 3;

-- Step 21: **Monitor Index Usage**
-- Identify indexes that are not being used, potential candidates for removal
-- (requires ALTER INDEX ... MONITORING USAGE; DBA_OBJECT_USAGE is available from 12c)
SELECT owner, index_name, table_name, monitoring, used
FROM dba_object_usage
WHERE used = 'NO';

-- Step 22: **Gather Database Statistics (GSS)**
-- Regularly gathering statistics helps Oracle optimize query plans and overall performance.
-- This section will gather stats for tables and indexes. Make sure the relevant schema is specified for better control.
-- Use DBMS_STATS to gather stats for all schemas or specific objects:
BEGIN
  -- Gather stats for all schemas in the database
  DBMS_STATS.GATHER_DATABASE_STATS;
  -- Alternatively, gather stats for a specific schema (replace SCHEMA_NAME)
  -- DBMS_STATS.GATHER_SCHEMA_STATS('SCHEMA_NAME');
END;
/

-- Step 23: **Index Usage Stats:**
-- Monitor Index usage statistics to identify unused or underperforming indexes
SELECT index_name, 
       table_name, 
       index_type, 
       leaf_blocks, 
       distinct_keys, 
       avg_leaf_blocks_per_key, 
       num_rows
FROM dba_indexes
WHERE table_owner NOT IN ('SYS', 'SYSTEM')
AND (distinct_keys = 0 OR avg_leaf_blocks_per_key > 10);  -- Adjust based on your system needs

-- Step 24: **Monitor Disabled Triggers**
-- Check if there are any triggers that are disabled on any tables. Disabled triggers could indicate issues in the application or unwanted side effects.
SELECT owner, table_name, trigger_name, status
FROM dba_triggers
WHERE status = 'DISABLED'
AND owner NOT IN ('SYS', 'SYSTEM')
ORDER BY owner, table_name;

Monday, 20 January 2025

Transfer file from Linux server to Windows server



 #!/bin/bash

# Windows machine details
WIN_USER="wasiq"              # Your Windows username
WIN_PASS="password123"        # Your Windows password
WIN_HOST="192.168.1.100"      # IP address or hostname of the Windows machine (can also use the Windows machine's hostname)
WIN_REMOTE_DIR="C:/Users/wasiq/Desktop"  # The directory on your Windows machine where the file will be saved

# Linux details
LINUX_FILE="/path/to/linux/file/myfile.tar.gz"  # The file on your Linux server you want to transfer

# Use sshpass to provide the password and run the sftp commands
# (this requires an SSH/SFTP server running on the Windows machine)
sshpass -p "${WIN_PASS}" sftp -v ${WIN_USER}@${WIN_HOST} << EOT
cd ${WIN_REMOTE_DIR}
put "${LINUX_FILE}"
bye
EOT



Script to transfer a file from Windows to a Linux server

vi transfer_file.sh

 #!/bin/bash

# Prompt for file path and server details
read -p "Enter the full path of the file to transfer (e.g., C:/Users/YourUsername/Desktop/myfile.txt): " file_path
read -p "Enter the Linux server username (e.g., oracle): " linux_user
read -p "Enter the Linux server IP address (e.g., 192.168.1.1): " linux_server
read -p "Enter the target directory on the Linux server (e.g., /tmp): " target_dir

# Path to the pscp executable (Make sure you have pscp.exe from the PuTTY installation)
PSCP_PATH="/path/to/pscp.exe"

# Use pscp to transfer the file
"$PSCP_PATH" "$file_path" "$linux_user"@"$linux_server":"$target_dir"

# Check if the transfer was successful
if [ $? -eq 0 ]; then
  echo "File transferred successfully!"
else
  echo "File transfer failed. Please check your details."
fi


bash transfer_file.sh




Transfer_file_with_directories.bat

@echo off
setlocal

REM Set your variables
set SERVER=your_linux_server_address
set USER=your_username
set PASSWORD=your_password
set LOCAL_FOLDER=C:\path\to\your\local\
set REMOTE_DIR=/path/to/remote/directory/

REM Set the path to WinSCP executable (make sure WinSCP is installed and the path is correct)
set WINSCP_PATH="C:\Program Files (x86)\WinSCP\WinSCP.exe"

REM Create a temporary script file for WinSCP commands
echo open sftp://%USER%:%PASSWORD%@%SERVER% > transfer_script.txt
echo option batch abort >> transfer_script.txt
echo option confirm off >> transfer_script.txt

REM Recursive file and directory transfer
echo Synchronizing local folder to remote directory... >> transfer_script.txt
echo synchronize remote "%LOCAL_FOLDER%" "%REMOTE_DIR%" >> transfer_script.txt
echo exit >> transfer_script.txt

REM Execute the transfer using WinSCP
"%WINSCP_PATH%" /script=transfer_script.txt

REM Check if the transfer was successful
if %ERRORLEVEL% EQU 0 (
    echo File transfer completed successfully.
) else (
    echo File transfer failed. Please check the log for errors.
)

REM Cleanup: Remove the temporary script file
del transfer_script.txt

endlocal




Sunday, 19 January 2025

Script to take backup of schema

vi backup_schema.sh

#!/bin/sh
# Prompt for user input for Schema name, Oracle SID, and Oracle Home
read -p "Enter schema name: " schema_name
read -p "Enter ORACLE_SID: " ORACLE_SID
read -p "Enter ORACLE_HOME path: " ORACLE_HOME

# Validate the inputs
if [ -z "$schema_name" ] || [ -z "$ORACLE_SID" ] || [ -z "$ORACLE_HOME" ]; then
  echo "All inputs (schema name, ORACLE_SID, ORACLE_HOME) are required. Exiting..." >> nohup.out
  exit 1
fi

# Set environment variables
export ORACLE_SID
export ORACLE_HOME
export PATH=$PATH:$ORACLE_HOME/bin:$ORACLE_HOME/lib:/usr/ccs/bin:$ORACLE_HOME/OPatch
DATEFORMAT=`date +%Y%m%d`

# Data Pump export command
echo "Starting backup for schema $schema_name at $(date)" >> nohup.out
expdp \"/ as sysdba\" directory=OBACKUP dumpfile=expdp_${schema_name}_`echo $DATEFORMAT`.dmp logfile=expdp_${schema_name}_`echo $DATEFORMAT`.log schemas=$schema_name parallel=8 cluster=n flashback_time=systimestamp >> nohup.out 2>&1

# Check if the expdp command was successful
if [ $? -eq 0 ]; then
  echo "Backup of schema $schema_name completed successfully at $(date)." >> nohup.out
else
  echo "Backup failed for schema $schema_name. Please check the log file for details." >> nohup.out
  exit 1
fi


How to run the script?

$ ./backup_schema.sh
Enter schema name: HR
Enter ORACLE_SID: orcl
Enter ORACLE_HOME path: /u01/app/oracle/product/12.2.0.1/dbhome_1




Take a backup of a schema using the Data Pump utility; you just need to pass the schema name.

Pass the schema name as a command-line argument and the script will take the backup.


vi backup_schema.sh

 #!/bin/sh

# Set environment variables
export ORACLE_SID=testdb1
export ORACLE_HOME=/u01/app/oracle/product/12.2.0.1/test-db
export PATH=$PATH:$ORACLE_HOME/bin:$ORACLE_HOME/lib:/usr/ccs/bin:$ORACLE_HOME/OPatch
DATEFORMAT=`date +%Y%m%d`

# Check if schema name is passed as an argument
if [ -z "$1" ]; then
  echo "Schema name is required. Exiting..." >> nohup.out
  exit 1
fi

# Assign the first argument as the schema name
schema_name=$1

# Data Pump export command
echo "Starting backup for schema $schema_name at $(date)" >> nohup.out
expdp \"/ as sysdba\" directory=OBACKUP dumpfile=expdp_${schema_name}_`echo $DATEFORMAT`.dmp logfile=expdp_${schema_name}_`echo $DATEFORMAT`.log schemas=$schema_name parallel=8 cluster=n flashback_time=systimestamp >> nohup.out 2>&1

# Check if the expdp command was successful
if [ $? -eq 0 ]; then
  echo "Backup of schema $schema_name completed successfully at $(date)." >> nohup.out
else
  echo "Backup failed for schema $schema_name. Please check the log file for details." >> nohup.out
  exit 1
fi


Use the command below to run the script:

nohup ./backup_schema.sh schemaname &

Query to display Non Default Oracle Database users

SELECT username, account_status, profile
FROM dba_users
WHERE username NOT IN (
  'SYS', 'SYSTEM', 'OUTLN', 'XDB', 'DBSNMP', 'APPQOSSYS', 'APEX_040000',
  'ORDDATA', 'WMSYS', 'DGSYS', 'CTXSYS', 'MDSYS', 'OLAPSYS', 'SYSMAN', 'RMSYS',
  'SPATIAL_CSW_USER', 'SPATIAL_WMS_USER', 'SPATIAL_WFS_USER'
)
ORDER BY username;
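
On 12c and later you can get a similar list without maintaining the exclusion list by hand, using the ORACLE_MAINTAINED column of DBA_USERS:

SELECT username, account_status, profile
FROM dba_users
WHERE oracle_maintained = 'N'
ORDER BY username;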

Saturday, 18 January 2025

Compile invalid objects for a particular schema

select 'alter ' || decode(object_type, 'PACKAGE BODY', 'PACKAGE', 'UNDEFINED', 'MATERIALIZED VIEW', object_type) ||
       ' ' || owner || '.' || object_name ||
       decode(object_type, 'PACKAGE BODY', ' compile body;', ' compile;') as ddl
from dba_objects
where status = 'INVALID' and owner = '&TB';
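
As an alternative to generating and running individual ALTER statements, the whole schema can be recompiled in one call with the standard UTL_RECOMP package:

EXEC UTL_RECOMP.RECOMP_SERIAL('&TB');

-- Re-check for anything still invalid
SELECT COUNT(*) FROM dba_objects WHERE status = 'INVALID' AND owner = '&TB';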

Last 10 executions of a SQL query

set lines 132
col sql_id head "SQL ID"
col exec_start head "Exec Start"
col exec_time_min head "Exec Time (min)"
col sid head "SID"

accept sql_id prompt 'Enter SQL ID: '

-- Last 10 executions of the given SQL ID, based on Active Session History
-- (requires the Diagnostics Pack license)
select sql_id,
       sql_exec_id,
       session_id as sid,
       to_char(min(sql_exec_start), 'mm/dd/yy hh24:mi:ss') as exec_start,
       round((cast(max(sample_time) as date) - min(sql_exec_start)) * 24 * 60, 2) as exec_time_min
from v$active_session_history
where sql_id = '&sql_id'
  and sql_exec_id is not null
group by sql_id, sql_exec_id, session_id
order by min(sql_exec_start) desc
fetch first 10 rows only;  -- Display only the last 10 executions

Script to check locks in the database

 set serveroutput on size 10000

declare

   cursor c1 is

   select * from v$lock where request != 0

   order by id1, id2;

   wid1            number := -999999;

   wid2            number := -999999;

   wholder_detail  varchar2(200);

   v_err_msg       varchar2(80);

   wsid            number(5);

   wstep           number(2);

   wtype           varchar2(10);

   wobject_name    varchar2(180);

   wobject_name1   varchar2(80);

   wlock_type      varchar2(50);

   w_lastcallet    varchar2(11);

   h_lastcallet    varchar2(11);


   -- Dynamically fetch the machine name without hardcoding any prefix

   machine_name    varchar2(100);


   -- Added by Harish, 24-Feb-2010

   cursor c2 (v_id1 v$lock.ID1%TYPE, v_id2 v$lock.ID2%TYPE) is 

   select sid , type from v$lock

   where id1 = v_id1

     and id2 = v_id2

     and request = 0

     and lmode != 4;


begin

   -- Get the machine name dynamically from the current session

   select machine into machine_name from v$session where rownum = 1;


   -- Now the 'machine_name' will contain the dynamic machine name from v$session


   for c1_rec in c1 loop

      if c1_rec.id1 = wid1 and c1_rec.id2 = wid2 then

         null;

      else

         wstep := 10;


         -- Begin Added by ab

         for c2_rec in c2(c1_rec.id1, c1_rec.id2) loop

            wsid := c2_rec.sid;

            wtype := c2_rec.type;


            -- Output holder session details

            dbms_output.put_line('  ');

            wstep := 20;

            select 'Holder DBU: ' || s.username || ' OSU: ' || s.osuser || ' DBP:' || p.spid || ' APP: ' || s.process ||

                   ' SID:' || s.sid || ' Status: ' || s.status ||

                   ' (' || floor(last_call_et/3600) || ':' ||

                   floor(mod(last_call_et, 3600)/60) || ':' ||

                   mod(mod(last_call_et, 3600), 60) ||

                   ') Module:' || module ||

                   ' AppSrvr: ' || substr(s.machine, 1, 15)  -- No prefix replacement, just display the machine

            into wholder_detail

            from v$session s, v$process p

            where s.sid = wsid

              and s.paddr = p.addr;

            dbms_output.put_line(wholder_detail);

            

            -- Lock type and object info

            begin

               select decode(wtype,'TX', 'Transaction',

                              'DL', 'DDL Lock',

                              'MR', 'Media Recovery',

                              'RT', 'Redo Thread',

                              'UN', 'User Name',

                              'TX', 'Transaction',

                              'TM', 'DML',

                              'UL', 'PL/SQL User Lock',

                              'DX', 'Distributed Xaction',

                              'CF', 'Control File',

                              'IS', 'Instance State',

                              'FS', 'File Set',

                              'IR', 'Instance Recovery',

                              'ST', 'Disk Space Transaction',

                              'TS', 'Temp Segment',

                              'IV', 'Library Cache Invalidation',

                              'LS', 'Log Start or Switch',

                              'RW', 'Row Wait',

                              'SQ', 'Sequence Number',

                              'TE', 'Extend Table',

                              'TT', 'Temp Table',

                              'Un-Known Type of Lock')

               into wlock_type

               from dual;

               

               declare

                  cursor c3 is

                  select object_id from v$locked_object

                  where session_id = wsid;

               begin

                  wobject_name := '';

                  for c3_rec in c3 loop

                     select object_type || ': ' || owner || '.' || object_name

                     into wobject_name

                     from dba_objects

                     where object_id = c3_rec.object_id;

                     wobject_name := wobject_name || ' ' || wobject_name1;

                  end loop;

               exception

                  when others then

                     wobject_name := wobject_name || ' No Object Found';

               end;

               dbms_output.put_line('Lock Held: ' || wlock_type || ' for Object :' || wobject_name);

            exception

               when no_data_found then

                  dbms_output.put_line('Lock Held: ' || wlock_type || ' No object found in DBA Objects');

            end;

         end loop;

      end if;

      wstep := 30;


      -- Requestor session details

      select '....   Requestor DBU: ' || s.username || ' OSU: ' || s.osuser || ' DBP:' || p.spid || ' APP: ' || s.process ||

             ' SID:' || s.sid || ' Status: ' || s.status ||

             ' (' || floor(last_call_et/3600) || ':' ||

             floor(mod(last_call_et, 3600)/60) || ':' ||

             mod(mod(last_call_et, 3600), 60) ||

             ') Module:' || module ||

             ' AppSrvr: ' || substr(s.machine, 1, 15)  -- No prefix replacement, just display the machine

      into wholder_detail

      from v$session s, v$process p

      where s.sid = c1_rec.sid

        and s.paddr = p.addr;

      dbms_output.put_line(wholder_detail);


      wid1 := c1_rec.id1;

      wid2 := c1_rec.id2;

   end loop;


   -- No locks held

   if wid1 = -999999 then

      wstep := 40;

      dbms_output.put_line('No one requesting locks held by others');

   end if;


exception

   when others then

      v_err_msg := (sqlerrm || '  ' || sqlcode || ' step=' || to_char(wstep));

      DBMS_OUTPUT.PUT_LINE(v_err_msg);

end;

/


How to disable Auto Task Tuning jobs...

 set lines 180 pages 1000
col client_name for a40
col attributes for a60
select client_name, status,attributes,service_name from dba_autotask_client;
CLIENT_NAME                              STATUS   ATTRIBUTES                                                   SERVICE_NAME
---------------------------------------- -------- ------------------------------------------------------------ ----------------------------------------------------------------
auto optimizer stats collection          DISABLED ON BY DEFAULT, VOLATILE, SAFE TO KILL
auto space advisor                       ENABLED  ON BY DEFAULT, VOLATILE, SAFE TO KILL
sql tuning advisor                       ENABLED  ONCE PER WINDOW, ON BY DEFAULT, VOLATILE, SAFE TO KILL


BEGIN
DBMS_AUTO_TASK_ADMIN.disable(
client_name => 'auto space advisor',
operation   => NULL,
window_name => NULL);
END;
/

PL/SQL procedure successfully completed.

BEGIN
  DBMS_AUTO_TASK_ADMIN.disable(
    client_name => 'sql tuning advisor',
    operation   => NULL,
    window_name => NULL);
END;  
/

PL/SQL procedure successfully completed.

BEGIN
DBMS_AUTO_TASK_ADMIN.disable(
client_name => 'auto optimizer stats collection',
operation   => NULL,
window_name => NULL);
END;
/

PL/SQL procedure successfully completed.

SQL> select client_name, status,attributes,service_name from dba_autotask_client;
CLIENT_NAME                              STATUS   ATTRIBUTES                                                   SERVICE_NAME
---------------------------------------- -------- ------------------------------------------------------------ ----------------------------------------------------------------
auto optimizer stats collection          DISABLED ON BY DEFAULT, VOLATILE, SAFE TO KILL
auto space advisor                       DISABLED ON BY DEFAULT, VOLATILE, SAFE TO KILL
sql tuning advisor                       DISABLED ONCE PER WINDOW, ON BY DEFAULT, VOLATILE, SAFE TO KILL

SQL>
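
If a task needs to be switched back on later, the same package provides an ENABLE procedure, for example:

BEGIN
  DBMS_AUTO_TASK_ADMIN.enable(
    client_name => 'auto optimizer stats collection',
    operation   => NULL,
    window_name => NULL);
END;
/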

Script to take backup of multiple schemas using a shell script

 $crontab -l

00 04 * * * /ora/app/oracle/refresh/expdp_testdb_SIT.sh 1>/dev/null 2>&1


Script

cat /ora/app/oracle/refresh/expdp_testdb_SIT.sh
#!/bin/sh
export ORACLE_SID=testdb1
export ORACLE_HOME=/u01/app/oracle/product/12.2.0.1/test-db
export PATH=$PATH:$ORACLE_HOME/bin:$ORACLE_HOME/lib:/usr/ccs/bin:$ORACLE_HOME/OPatch
DATEFORMAT=`date +%Y%m%d`

# Data Pump export
expdp \"/ as sysdba\" directory=OBACKUP dumpfile=expdp_SIT_`echo $DATEFORMAT`.dmp logfile=expdp_SIT_`echo $DATEFORMAT`.log schemas=ACS,AU,EPG,FP,FRAUD,MAIN,SMSGATE,SSO,SVAB,SVCG,SVCGUP,SVCM,SVIP,SVISTA,SVMP,SVMP_IG,SVRB,SVWEB,SVWFE,WEBGATE parallel=8 cluster=n flashback_time=systimestamp

Monitoring Data pump Jobs

select sid, serial#, sofar, totalwork, dp.owner_name, dp.state, dp.job_mode
from gv$session_longops sl, gv$datapump_job dp
where sl.opname = dp.job_name and sofar != totalwork;
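
A variation of the same idea that also shows percent complete and the estimated time remaining from GV$SESSION_LONGOPS:

select sl.sid, sl.serial#, sl.opname,
       round(sl.sofar / sl.totalwork * 100, 2) as pct_done,
       sl.time_remaining
from gv$session_longops sl
where sl.totalwork > 0
  and sl.sofar != sl.totalwork
  and sl.opname in (select job_name from dba_datapump_jobs);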