Wednesday, 29 January 2025

Steps to enable user login auditing

1. Set the audit trail to "DB" (database level). You can do this in SQL*Plus or through any SQL tool you use to connect to the database.

ALTER SYSTEM SET audit_trail = 'DB' SCOPE = SPFILE;

This ensures that the audit trail is written to the database and will be available for querying later. 
You'll need to restart the database after running this command for the change to take effect.
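
To confirm the current setting (for example, after the restart), you can check it from SQL*Plus:

SHOW PARAMETER audit_trail

SELECT name, value FROM v$parameter WHERE name = 'audit_trail';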


2. Audit only user login activities: Use the following command to enable auditing specifically for login events:

AUDIT SESSION;


This will audit all login attempts (successful and failed) for all users. If you want to restrict auditing to only specific users, you can specify a user as follows:

AUDIT SESSION BY scott;

Or to audit login attempts by a particular set of users:


AUDIT SESSION BY user1, user2;
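
To turn session auditing off again later, the matching NOAUDIT commands are:

NOAUDIT SESSION;
NOAUDIT SESSION BY scott;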


If you modified the audit_trail parameter, restart the database for the changes to take effect:

SHUTDOWN IMMEDIATE;
STARTUP;


Verifying the Audit Setup:

After setting the audit configuration, you can verify that login auditing is working by querying the DBA_AUDIT_TRAIL view for LOGON records:


SELECT * FROM DBA_AUDIT_TRAIL WHERE ACTION_NAME = 'LOGON';


This will show you the login-related audit entries.
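
For a more readable view of just the session audit records (failed attempts show a non-zero RETURNCODE), a query along these lines can be used:

SELECT os_username, username, userhost, timestamp, returncode
FROM dba_audit_session
ORDER BY timestamp DESC;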


Tuesday, 28 January 2025

rman rac duplicate with catalog and archive



1) Set below parameters in pfile and start the database in nomount stage.

*.db_name='HRPgnet2'
*.db_unique_name='HRPgnet2'
*.control_files='+HRPGNET2_DATA','+HRPGNET2_ARCH'
*.db_file_name_convert='+HRPGNEP2_DATA','+HRPGNET2_DATA'
*.log_file_name_convert='+HRPGNEP2_DATA','+HRPGNET2_DATA' 

SQL> startup nomount pfile='/HRPgnet2/oracle/initHRPgnet2.ora'
ORACLE instance started.

Total System Global Area 1068937216 bytes
Fixed Size                  2235208 bytes
Variable Size             507511992 bytes
Database Buffers          545259520 bytes
Redo Buffers               13930496 bytes
SQL> select open_mode from v$database;
select open_mode from v$database
                      *
ERROR at line 1:
ORA-01507: database not mounted

SQL>


2) Restore production controlfile

RMAN> connect target /;

connected to target database: HRPGNEP2 (not mounted)

RMAN> restore controlfile from '/HRPgnet2/oracle/11.2.0/dbs/snapcf_HRPgnep21.f';

Starting restore at 26-NOV-13
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=1777 device type=DISK

channel ORA_DISK_1: copied control file copy
output file name=+HRPGNET2_DATA/HRPgnep2/controlfile/current.297.832504631
output file name=+HRPGNET2_ARCH/HRPgnep2/controlfile/current.282.832504631
Finished restore at 26-NOV-13

RMAN> 


3) Catalog the backup pieces

rman
connect target / 
CATALOG START WITH '/backup/rman_1060/HRPgnep2/db_bkp_gnep2' NOPROMPT; 
run 
{
ALLOCATE CHANNEL ch00 TYPE DISK;
ALLOCATE CHANNEL ch01 TYPE DISK;
ALLOCATE CHANNEL ch02 TYPE DISK;
ALLOCATE CHANNEL ch03 TYPE DISK;
ALLOCATE CHANNEL ch04 TYPE DISK;
ALLOCATE CHANNEL ch05 TYPE DISK;
ALLOCATE CHANNEL ch06 TYPE DISK;
ALLOCATE CHANNEL ch07 TYPE DISK;
ALLOCATE CHANNEL ch08 TYPE DISK;
ALLOCATE CHANNEL ch09 TYPE DISK;
ALLOCATE CHANNEL ch10 TYPE DISK;
ALLOCATE CHANNEL ch11 TYPE DISK;
set newname for datafile 1 to '+HRPGNET2_DATA';
set newname for datafile 2 to '+HRPGNET2_DATA';
set newname for datafile 3 to '+HRPGNET2_DATA';
set newname for datafile 4 to '+HRPGNET2_DATA';
set newname for datafile 5 to '+HRPGNET2_DATA';
set newname for datafile 6 to '+HRPGNET2_DATA';
set newname for datafile 7 to '+HRPGNET2_DATA';
set newname for datafile 8 to '+HRPGNET2_DATA';
set newname for datafile 9 to '+HRPGNET2_DATA';
set newname for datafile 10 to '+HRPGNET2_DATA';
set newname for datafile 11 to '+HRPGNET2_DATA';
set newname for datafile 12 to '+HRPGNET2_DATA';
set newname for datafile 13 to '+HRPGNET2_DATA';
set newname for datafile 14 to '+HRPGNET2_DATA';
set newname for datafile 15 to '+HRPGNET2_DATA';
set newname for datafile 16 to '+HRPGNET2_DATA';
set newname for datafile 17 to '+HRPGNET2_DATA';
set newname for datafile 18 to '+HRPGNET2_DATA';
set newname for datafile 19 to '+HRPGNET2_DATA';
set newname for datafile 288 to '+HRPGNET2_DATA';
set newname for datafile 295 to '+HRPGNET2_DATA';
set newname for datafile 314 to '+HRPGNET2_DATA';
set newname for datafile 351 to '+HRPGNET2_DATA';
set newname for datafile 352 to '+HRPGNET2_DATA';
set newname for datafile 353 to '+HRPGNET2_DATA';
set newname for datafile 354 to '+HRPGNET2_DATA';
set newname for datafile 379 to '+HRPGNET2_DATA';
set newname for datafile 392 to '+HRPGNET2_DATA';
set newname for datafile 393 to '+HRPGNET2_DATA';
set newname for datafile 394 to '+HRPGNET2_DATA';
set newname for datafile 395 to '+HRPGNET2_DATA';
set newname for datafile 396 to '+HRPGNET2_DATA';
set newname for datafile 397 to '+HRPGNET2_DATA';
set newname for datafile 398 to '+HRPGNET2_DATA';
set newname for datafile 399 to '+HRPGNET2_DATA';
set newname for datafile 400 to '+HRPGNET2_DATA';
set newname for datafile 401 to '+HRPGNET2_DATA';
set newname for datafile 402 to '+HRPGNET2_DATA';
set newname for datafile 403 to '+HRPGNET2_DATA';
set newname for datafile 404 to '+HRPGNET2_DATA';
set newname for datafile 405 to '+HRPGNET2_DATA';
set newname for datafile 406 to '+HRPGNET2_DATA';
set newname for datafile 407 to '+HRPGNET2_DATA';
restore database;
switch datafile all;
RELEASE CHANNEL ch00;
RELEASE CHANNEL ch01;
RELEASE CHANNEL ch02;
RELEASE CHANNEL ch03;
RELEASE CHANNEL ch04;
RELEASE CHANNEL ch05;
RELEASE CHANNEL ch06;
RELEASE CHANNEL ch07;
RELEASE CHANNEL ch08;
RELEASE CHANNEL ch09;
RELEASE CHANNEL ch10;
RELEASE CHANNEL ch11;
}

Once the restore is completed, recover the database using the RECOVER DATABASE command and open the database with RESETLOGS.
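
Once the clone is open, a quick sanity check of the new database name and open mode (assuming the duplicate completed cleanly):

SELECT name, db_unique_name, open_mode, resetlogs_time FROM v$database;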


RMAN Backup script

 rman_backup.sh


. /u02/PROD_gtebsdb1p.env

CDATE=`date +%d-%m-%Y-%H-%M`

export CDATE;

mkdir -p /backups/rman/rman_backup/$CDATE

export LOGFILE=/backups/rman/rman_backup/$CDATE/rman_backup_log_${CDATE}.log

touch $LOGFILE

rman target / << EOF | tee -i $LOGFILE


run {

sql "alter system archive log current";

allocate channel t1 type disk;

allocate channel t2 type disk;

allocate channel t3 type disk;

allocate channel t4 type disk;

backup format='/backups/rman/rman_backup/$CDATE/Tp_cntl_%Y%M%D_%d_%s_%p_1.ctl' current controlfile;

backup as compressed backupset format='/backups/rman/rman_backup/$CDATE/%U' database;

backup format='/backups/rman/rman_backup/$CDATE/%d_%s_%p_%c_%t.arc' archivelog FROM TIME 'SYSDATE-2' UNTIL TIME 'SYSDATE';

backup format='/backups/rman/rman_backup/$CDATE/Tp_cntl_%Y%M%D_%d_%s_%p_2.ctl' current controlfile;

release channel t1;

release channel t2;

release channel t3;

release channel t4;

}

exit
EOF
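
After the script finishes, the status of recent RMAN jobs can be cross-checked from the database (a quick check, not part of the script itself):

SELECT session_key, input_type, status,
       TO_CHAR(start_time, 'DD-MON-YY HH24:MI') AS start_time,
       TO_CHAR(end_time, 'DD-MON-YY HH24:MI')   AS end_time
FROM   v$rman_backup_job_details
ORDER  BY end_time DESC;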

Recovering the database to a point in time (PITR).

rman target / log=/u08/PBIDW/RMAN/SBIDW_BACKUP/rman_recover.log << EOF
run
{
ALLOCATE CHANNEL c1 DEVICE TYPE DISK;
ALLOCATE CHANNEL c2 DEVICE TYPE DISK;
ALLOCATE CHANNEL c3 DEVICE TYPE DISK;
SET ARCHIVELOG DESTINATION to '/u08/PBIDW/RMAN/SBIDW_BACKUP';
RECOVER DATABASE UNTIL TIME "TO_DATE('2015-06-01:00:00:00','YYYY-MM-DD:HH24:MI:SS')";
release channel c1;
release channel c2;
release channel c3;
}
EOF
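
After an incomplete (point-in-time) recovery like this, the database has to be opened with RESETLOGS:

ALTER DATABASE OPEN RESETLOGS;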

RMAN DUPLICATE / CLONE RAC DATABASE


RMAN RAC Database backup pieces are available in below location.

RMAN Backup ==> /backup/rman_1060/proddb/db_bkp_prod


Cloning the target database using RMAN backup.


rman
connect auxiliary /
run 
{
ALLOCATE auxiliary CHANNEL ch01 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch02 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch03 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch04 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch05 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch06 TYPE DISK;
DUPLICATE DATABASE TO DEVDB BACKUP LOCATION '/backup/rman_1060/proddb/db_bkp_prod' NOFILENAMECHECK;
RELEASE CHANNEL ch01;
RELEASE CHANNEL ch02;
RELEASE CHANNEL ch03;
RELEASE CHANNEL ch04;
RELEASE CHANNEL ch05;
RELEASE CHANNEL ch06;
}

Restoring archive logs from tape to a backup location using sequence numbers

rman target /

RMAN > connect catalog rmanrail/rman007@atlrxxp1

connected to recovery catalog database

RMAN >run
{

set archivelog destination to '/proddb/dbdata/data1/arch20jun';
restore archivelog from logseq=81066 until logseq=81072;

}
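
To work out which sequence numbers you actually need before running the restore, the archived log history can be checked first (the time window below is just an example):

SELECT thread#, sequence#, first_time, next_time
FROM   v$archived_log
WHERE  first_time > SYSDATE - 1
ORDER  BY thread#, sequence#;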

Create standby from backup base location

Run the below commands on the standby (auxiliary) server.



rman auxiliary / |tee restore.new.log
run {
ALLOCATE auxiliary CHANNEL ch01 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch02 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch03 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch04 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch05 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch06 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch07 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch08 TYPE DISK;
duplicate target database for standby backup location '/t02/RMAN_BACKUP' NOFILENAMECHECK;
RELEASE CHANNEL ch01;
RELEASE CHANNEL ch02;
RELEASE CHANNEL ch03;
RELEASE CHANNEL ch04;
RELEASE CHANNEL ch05;
RELEASE CHANNEL ch06;
RELEASE CHANNEL ch07;
RELEASE CHANNEL ch08;
}
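
Once the standby is created, managed recovery is normally started so it begins applying redo; a minimal example, followed by a quick role check:

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

SELECT database_role, open_mode FROM v$database;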

rman duplicate command

To clone the target (source) database to a new database.

Run the commands below from the auxiliary (destination) host.


rman log /u01/oracle/recovery.log

connect target sys/Nex#us1as3p@PRODDB
 
connect catalog rman/rman007@atlrmxx1
 
connect auxiliary /

run {
ALLOCATE auxiliary CHANNEL ch00 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch01 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch02 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch03 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch04 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch05 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch06 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch07 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch08 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch09 TYPE DISK;
ALLOCATE auxiliary CHANNEL ch10 TYPE DISK;
set until time '12-NOV-2012 15:17:00';
duplicate target database to TESTDB;    
RELEASE CHANNEL ch00;
RELEASE CHANNEL ch01;
RELEASE CHANNEL ch02;
RELEASE CHANNEL ch03;
RELEASE CHANNEL ch04;
RELEASE CHANNEL ch05;
RELEASE CHANNEL ch06;
RELEASE CHANNEL ch07;
RELEASE CHANNEL ch08;
RELEASE CHANNEL ch09;
RELEASE CHANNEL ch10;
}

rman rac compressed database backup along with controlfile and archives .

 rman log=/backup/PRODDB/30Dec10_zp1/RMAN_FullBkp_prod.log
connect target /
run {
ALLOCATE CHANNEL ch02 TYPE DISK;
ALLOCATE CHANNEL ch03 TYPE DISK;
ALLOCATE CHANNEL ch04 TYPE DISK;
ALLOCATE CHANNEL ch05 TYPE DISK;
ALLOCATE CHANNEL ch06 TYPE DISK;
ALLOCATE CHANNEL ch07 TYPE DISK;
ALLOCATE CHANNEL ch08 TYPE DISK;
ALLOCATE CHANNEL ch09 TYPE DISK;
ALLOCATE CHANNEL ch10 TYPE DISK;
ALLOCATE CHANNEL ch11 TYPE DISK;
BACKUP AS COMPRESSED BACKUPSET FORMAT '/backup/PRODDB/30Dec10_zp1/%d_db_u%u_s%s_p%p_t%t_db' FILESPERSET 10 database plus archivelog ;
RELEASE CHANNEL ch02;
RELEASE CHANNEL ch03;
RELEASE CHANNEL ch04;
RELEASE CHANNEL ch05;
RELEASE CHANNEL ch06;
RELEASE CHANNEL ch07;
RELEASE CHANNEL ch08;
RELEASE CHANNEL ch09;
RELEASE CHANNEL ch10;
RELEASE CHANNEL ch11;
}
sql 'alter system archive log current';
BACKUP FORMAT '/backup/PRODDB/30Dec10_zp1/Control_bk_u%u_s%s_p%p_t%t_bk' TAG 'PRODDB_ctrl' CURRENT CONTROLFILE;
exit;

Take RMAN backup to a desired location

rman log=/backup/PRODDB/db_backup_PRODDB/hot_bkp_prod.log
connect target /
run {
ALLOCATE CHANNEL ch02 TYPE DISK;
ALLOCATE CHANNEL ch03 TYPE DISK;
ALLOCATE CHANNEL ch04 TYPE DISK;
ALLOCATE CHANNEL ch05 TYPE DISK;
ALLOCATE CHANNEL ch06 TYPE DISK;
ALLOCATE CHANNEL ch07 TYPE DISK;
ALLOCATE CHANNEL ch08 TYPE DISK;
ALLOCATE CHANNEL ch09 TYPE DISK;
ALLOCATE CHANNEL ch10 TYPE DISK;
ALLOCATE CHANNEL ch11 TYPE DISK;
BACKUP FORMAT '/backup/PRODDB/db_backup_PRODDB/%d_db_u%u_s%s_p%p_t%t_db' FILESPERSET 10 database plus archivelog;
RELEASE CHANNEL ch02;
RELEASE CHANNEL ch03;
RELEASE CHANNEL ch04;
RELEASE CHANNEL ch05;
RELEASE CHANNEL ch06;
RELEASE CHANNEL ch07;
RELEASE CHANNEL ch08;
RELEASE CHANNEL ch09;
RELEASE CHANNEL ch10;
RELEASE CHANNEL ch11;
}
sql 'alter system archive log current';
BACKUP FORMAT '/backup/PRODDB/db_backup_PRODDB/Control_bk_u%u_s%s_p%p_t%t_bk' TAG 'PRODDB_ctrl' CURRENT CONTROLFILE;
exit;

Monday, 27 January 2025

Script to take backup of schemas and objects

#!/bin/bash

# Ask for Oracle environment variables
read -p "Enter your ORACLE_HOME path: " ORACLE_HOME
read -p "Enter your ORACLE_SID: " ORACLE_SID

# Set Oracle environment variables
export ORACLE_HOME
export ORACLE_SID
export PATH=$ORACLE_HOME/bin:$PATH

# Get the current date format for naming backups
DATEFORMAT=$(date +"%Y%m%d_%H%M%S")

# Show menu options
echo "Select the type of backup:"
echo "1) Single schema backup"
echo "2) Multiple schema backup (comma separated)"
echo "3) Single table backup"
echo "4) Multiple table backup (comma separated)"

# Read user input
read -p "Enter your choice (1/2/3/4): " choice

case $choice in
  1)
    # Single schema backup
    read -p "Enter the schema name: " schema_name
    echo "Starting backup for schema $schema_name at $(date)" >> nohup.out

    # Run expdp command for single schema
    expdp \"/ as sysdba\" directory=OBACKUP dumpfile=expdp_${schema_name}_$DATEFORMAT_%U.dmp logfile=expdp_${schema_name}_$DATEFORMAT.log schemas=$schema_name parallel=4 cluster=n >> nohup.out 2>&1
    ;;
  
  2)
    # Multiple schema backup
    read -p "Enter the schema names (comma separated): " schema_names
    IFS=',' read -r -a SCHEMA_ARRAY <<< "$schema_names"

    for schema in "${SCHEMA_ARRAY[@]}"; do
      echo "Starting backup for schema $schema at $(date)" >> nohup.out

      # Run expdp command for each schema
      expdp \"/ as sysdba\" directory=OBACKUP dumpfile=expdp_${schema}_$DATEFORMAT_%U.dmp logfile=expdp_${schema}_$DATEFORMAT.log schemas=$schema parallel=4 cluster=n >> nohup.out 2>&1
    done
    ;;
  
  3)
    # Single table backup
    read -p "Enter the table name (schema.tablename): " table_name
    echo "Starting backup for table $table_name at $(date)" >> nohup.out

    # Run expdp command for single table
    expdp \"/ as sysdba\" directory=OBACKUP dumpfile=expdp_${table_name}_$DATEFORMAT_%U.dmp logfile=expdp_${table_name}_$DATEFORMAT.log tables=$table_name parallel=4 cluster=n >> nohup.out 2>&1
    ;;
  
  4)
    # Multiple table backup
    read -p "Enter the table names (schema.tablename1,schema.tablename2,...): " table_names
    IFS=',' read -r -a TABLE_ARRAY <<< "$table_names"

    for table in "${TABLE_ARRAY[@]}"; do
      echo "Starting backup for table $table at $(date)" >> nohup.out

      # Run expdp command for each table
      expdp \"/ as sysdba\" directory=OBACKUP dumpfile=expdp_${table}_$DATEFORMAT_%U.dmp logfile=expdp_${table}_$DATEFORMAT.log tables=$table parallel=4 cluster=n >> nohup.out 2>&1
    done
    ;;
  
  *)
    echo "Invalid choice. Please run the script again and select a valid option."
    exit 1
    ;;
esac

echo "Backup process completed at $(date)." >> nohup.out

Friday, 24 January 2025

Script to take backup of multiple schemas.

#!/bin/bash

# Prompt for necessary inputs
read -p "Enter schemas to backup (comma-separated): " schema_name
read -p "Enter ORACLE_SID: " ORACLE_SID
read -p "Enter ORACLE_HOME path: " ORACLE_HOME

# Input for transferring files to the target server
read -p "Enter target server IP: " DEST_IP
read -p "Enter target directory path: " DEST_PATH

# Date format for filenames
DATEFORMAT=$(date +%Y%m%d%H%M%S)

# Loop through schemas and perform export
IFS=',' read -r -a SCHEMA_ARRAY <<< "$schema_name"
for schema in "${SCHEMA_ARRAY[@]}"; do
  echo "Starting backup for schema $schema at $(date)" >> nohup.out

  # Run expdp command with %U option for multiple dump files
  expdp \"/ as sysdba\" directory=OBACKUP dumpfile=expdp_${schema}_$DATEFORMAT_%U.dmp logfile=expdp_${schema}_$DATEFORMAT.log schemas=$schema parallel=4 cluster=n >> nohup.out 2>&1

  # Check for successful export
  if [ $? -eq 0 ]; then
    echo "Export for $schema completed." >> nohup.out
    # Transfer dump and log files (assumes the OBACKUP directory object maps to a local path named OBACKUP)
    scp OBACKUP/expdp_${schema}_${DATEFORMAT}_* oracle@$DEST_IP:$DEST_PATH >> nohup.out 2>&1
    if [ $? -eq 0 ]; then
      echo "Files for $schema transferred successfully." >> nohup.out
    else
      echo "File transfer failed for $schema." >> nohup.out
    fi
  else
    echo "Export failed for $schema." >> nohup.out
  fi
done

echo "Backup script completed at $(date)" >> nohup.out

Tuesday, 21 January 2025

Find old database backup objects and drop them to reclaim space.

 -- Query to find backup objects (tables with '_bak' or '_backup' in their names)

SELECT owner, table_name 

FROM dba_tables

WHERE table_name LIKE '%\_BAK%' ESCAPE '\' OR table_name LIKE '%\_BACKUP%' ESCAPE '\';



-- Query to find backup views

SELECT owner, view_name 

FROM dba_views

WHERE view_name LIKE '%\_BAK%' ESCAPE '\' OR view_name LIKE '%\_BACKUP%' ESCAPE '\';
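
Before dropping anything, it can help to see how much space the candidate tables actually occupy; a sketch using dba_segments:

SELECT owner, segment_name, ROUND(SUM(bytes)/1024/1024, 2) AS size_mb
FROM   dba_segments
WHERE  segment_type = 'TABLE'
AND    (segment_name LIKE '%\_BAK%' ESCAPE '\' OR segment_name LIKE '%\_BACKUP%' ESCAPE '\')
GROUP  BY owner, segment_name
ORDER  BY size_mb DESC;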

Database Health Check Script

-- DBA Health Check Script for Oracle Database

-- Step 1: Check Database Instance and Version
-- Check Oracle Database version and instance status
SELECT * FROM v$version;
SELECT instance_name, status FROM v$instance;

-- Step 2: Check Database Status
-- Ensure that the database is open and running
SELECT name, open_mode FROM v$database;

-- Step 3: Check Tablespace Usage
-- Check for space usage and growth of tablespaces
SELECT tablespace_name, 
       ROUND(SUM(bytes_used + bytes_free) / 1024 / 1024, 2) AS "Total_MB",
       ROUND(SUM(bytes_free) / 1024 / 1024, 2) AS "Free_MB",
       ROUND(SUM(bytes_used) / 1024 / 1024, 2) AS "Used_MB",
       ROUND(SUM(bytes_used) * 100 / SUM(bytes_used + bytes_free), 2) AS "Usage(%)"
FROM v$temp_space_header
GROUP BY tablespace_name;

SELECT tablespace_name, 
       ROUND(SUM(bytes) / 1024 / 1024, 2) AS "Allocated_MB",
       ROUND(SUM(GREATEST(maxbytes, bytes)) / 1024 / 1024, 2) AS "Max_MB",
       ROUND((SUM(bytes) / SUM(GREATEST(maxbytes, bytes))) * 100, 2) AS "Allocated(%)"
FROM dba_data_files
GROUP BY tablespace_name;

-- Step 4: Check for Expired or Invalid Objects
-- It's important to check for invalid objects and make sure everything is valid
SELECT owner, object_type, object_name
FROM dba_objects
WHERE status = 'INVALID';

-- Step 5: Check for Locked Sessions
-- This will show if any sessions are being locked, which could affect performance
SELECT sid, serial#, username, osuser, status, machine, program
FROM v$session
WHERE blocking_session IS NOT NULL;

-- Step 6: Check for Long Running Queries
-- Monitor long running queries which could be resource hogs
SELECT s.sid, s.serial#, s.username, q.sql_text, s.status, s.osuser, s.program
FROM v$session s, v$sql q
WHERE s.sql_id = q.sql_id
AND s.status = 'ACTIVE'
AND s.username IS NOT NULL
ORDER BY s.sid;

-- Step 7: Check Redo Log Status
-- It's important to monitor redo logs for any potential issues
SELECT member, status FROM v$logfile;

-- Step 8: Check Oracle Alert Log for Errors
-- Check the alert log for recent errors and important messages
-- Check the alert log using the following:
-- $ tail -n 100 $ORACLE_BASE/diag/rdbms/<db_name>/<instance_name>/alert_<instance_name>.log

-- Step 9: Check Backup Status
-- Ensure that backups are scheduled and have been successfully completed
SELECT session_key, input_type, status, start_time, end_time
FROM v$rman_backup_job_details
ORDER BY end_time DESC
FETCH FIRST 5 ROWS ONLY;

-- Step 10: Check the Listener Status
-- Ensure that the listener is up and running (checked at the OS level, not through SQL):
-- $ lsnrctl status

-- Step 11: Check for Database Jobs and their Status
-- You should check for any long-running or failed jobs in Oracle DBMS_SCHEDULER
SELECT job_name, state, last_start_date, next_run_date
FROM dba_scheduler_jobs;

-- Step 12: Check for Tables with Excessive Rows (Potentially Large Tables)
-- Identify tables with very large numbers of rows that may need partitioning or optimization
SELECT table_name, num_rows
FROM all_tables
WHERE owner NOT IN ('SYS', 'SYSTEM') 
AND num_rows > 1000000  -- Adjust the threshold based on your environment
ORDER BY num_rows DESC;

-- Step 13: Check for Fragmentation in Tablespaces
-- Check for tablespaces with high fragmentation or those requiring cleanup
SELECT tablespace_name, 
       ROUND(SUM(bytes) / 1024 / 1024, 2) AS "Free_MB",
       COUNT(*) AS "Free_Extents",
       ROUND(MAX(bytes) / 1024 / 1024, 2) AS "Largest_Free_Extent_MB"
FROM dba_free_space
GROUP BY tablespace_name;

-- Step 14: Check System Resource Usage
-- Check CPU, memory, and IO usage on the server to ensure no system resource issues
-- This would need to be done on the OS level, not directly through SQL.
-- For Linux:
-- $ top -n 1
-- $ vmstat 1 5
-- $ iostat

-- Step 15: Check for Datafile Growth
-- Check datafiles for growth and ensure there's enough space
SELECT file_name, ROUND(bytes / 1024 / 1024 / 1024, 2) AS "Size_GB"
FROM dba_data_files;

-- Step 16: Check for Pending Table/Index/Partition Stats
-- Ensure that statistics are up-to-date to improve query performance
SELECT table_name, num_rows, last_analyzed FROM dba_tables WHERE owner = 'SCHEMA_NAME' AND (last_analyzed IS NULL OR num_rows = 0);

-- Step 17: Check Database Users and Profiles
-- Monitor active database users and their profiles
SELECT username, profile FROM dba_users;

-- Step 18: Check for High Watermark for Undo Segments
-- Check undo segments and high-water mark in undo tablespace
SELECT usn, rssize, hwmsize, xacts, status
FROM v$rollstat;

-- Step 19: **Monitor Index Health**
-- Check for missing indexes on frequently queried tables (tables with no indexes at all)
SELECT t.owner, t.table_name
FROM dba_tables t
LEFT JOIN dba_indexes i ON t.table_name = i.table_name AND t.owner = i.table_owner
WHERE i.index_name IS NULL
AND t.owner NOT IN ('SYS', 'SYSTEM')
ORDER BY t.owner, t.table_name;

-- Step 20: **Monitor Fragmented Indexes**
-- Identify fragmented indexes which may need rebuilding or optimization.
-- Run ANALYZE INDEX <owner>.<index_name> VALIDATE STRUCTURE; first, which populates INDEX_STATS
SELECT name AS index_name, height, lf_rows, del_lf_rows,
       ROUND(del_lf_rows * 100 / NULLIF(lf_rows, 0), 2) AS "Deleted_Rows(%)"
FROM index_stats;

-- Step 21: **Monitor Index Usage**
-- Identify indexes that are not being used, potentially candidates for removal.
-- Enable tracking first with ALTER INDEX <owner>.<index_name> MONITORING USAGE;
-- (V$OBJECT_USAGE shows only indexes owned by the current schema)
SELECT index_name, table_name, monitoring, used
FROM v$object_usage
WHERE used = 'NO';

-- Step 22: **Gather Database Statistics (GSS)**
-- Regularly gathering statistics helps Oracle optimize query plans and overall performance.
-- This section will gather stats for tables and indexes. Make sure the relevant schema is specified for better control.
-- Use DBMS_STATS to gather stats for all schemas or specific objects:
BEGIN
  -- Gather stats for all schemas in the database
  DBMS_STATS.GATHER_DATABASE_STATS;
  -- Alternatively, gather stats for a specific schema (replace SCHEMA_NAME)
  -- DBMS_STATS.GATHER_SCHEMA_STATS('SCHEMA_NAME');
END;
/

-- Step 23: **Index Usage Stats:**
-- Monitor Index usage statistics to identify unused or underperforming indexes
SELECT owner, 
       index_name, 
       table_name, 
       leaf_blocks, 
       distinct_keys, 
       avg_leaf_blocks_per_key, 
       num_rows
FROM dba_indexes
WHERE table_owner NOT IN ('SYS', 'SYSTEM')
AND (distinct_keys = 0 OR avg_leaf_blocks_per_key > 10);  -- Adjust based on your system needs

-- Step 24: **Monitor Disabled Triggers**
-- Check if there are any triggers that are disabled on any tables. Disabled triggers could indicate issues in the application or unwanted side effects.
SELECT owner, table_name, trigger_name, status
FROM dba_triggers
WHERE status = 'DISABLED'
AND owner NOT IN ('SYS', 'SYSTEM')
ORDER BY owner, table_name;

Monday, 20 January 2025

Transfer file from Linux server to Windows server



 #!/bin/bash

# Windows machine details
WIN_USER="wasiq"              # Your Windows username
WIN_PASS="password123"        # Your Windows password
WIN_HOST="192.168.1.100"      # IP address or hostname of the Windows machine (can also use the Windows machine's hostname)
WIN_REMOTE_DIR="C:/Users/wasiq/Desktop"  # The directory on your Windows machine where the file will be saved

# Linux details
LINUX_FILE="/path/to/linux/file/myfile.tar.gz"  # The file on your Linux server you want to transfer

# Use sshpass to provide the password and run the sftp commands.
# Note: the script runs on the Linux server, so the Windows directory is the *remote*
# side (cd) and the file is pushed with put; trailing comments after an sftp command
# are not reliably supported, so the batch below is kept bare.
sshpass -p "${WIN_PASS}" sftp -v ${WIN_USER}@${WIN_HOST} << EOT
cd ${WIN_REMOTE_DIR}
put "${LINUX_FILE}"
bye
EOT



Script to transfer file windows to Linux server

vi transfer_file.sh

 #!/bin/bash

# Prompt for file path and server details
read -p "Enter the full path of the file to transfer (e.g., C:/Users/YourUsername/Desktop/myfile.txt): " file_path
read -p "Enter the Linux server username (e.g., oracle): " linux_user
read -p "Enter the Linux server IP address (e.g., 192.168.1.1): " linux_server
read -p "Enter the target directory on the Linux server (e.g., /tmp): " target_dir

# Path to the pscp executable (Make sure you have pscp.exe from the PuTTY installation)
PSCP_PATH="/path/to/pscp.exe"

# Use pscp to transfer the file
"$PSCP_PATH" "$file_path" "$linux_user"@"$linux_server":"$target_dir"

# Check if the transfer was successful
if [ $? -eq 0 ]; then
  echo "File transferred successfully!"
else
  echo "File transfer failed. Please check your details."
fi


bash transfer_file.sh




Transfer_file_with_directories.bat (Windows batch script using WinSCP)

@echo off
setlocal

REM Set your variables
set SERVER=your_linux_server_address
set USER=your_username
set PASSWORD=your_password
set LOCAL_FOLDER=C:\path\to\your\local\
set REMOTE_DIR=/path/to/remote/directory/

REM Set the path to WinSCP executable (make sure WinSCP is installed and the path is correct)
set WINSCP_PATH="C:\Program Files (x86)\WinSCP\WinSCP.exe"

REM Create a temporary script file for WinSCP commands
echo open sftp://%USER%:%PASSWORD%@%SERVER% > transfer_script.txt
echo option batch abort >> transfer_script.txt
echo option confirm off >> transfer_script.txt

REM Recursive file and directory transfer
echo Synchronizing local folder to remote directory... >> transfer_script.txt
echo synchronize remote "%LOCAL_FOLDER%" "%REMOTE_DIR%" >> transfer_script.txt

REM Execute the transfer using WinSCP
"%WINSCP_PATH%" /script=transfer_script.txt

REM Check if the transfer was successful
if %ERRORLEVEL% EQU 0 (
    echo File transfer completed successfully.
) else (
    echo File transfer failed. Please check the log for errors.
)

REM Cleanup: Remove the temporary script file
del transfer_script.txt

endlocal




Sunday, 19 January 2025

Script to take backup of schema

vi backup_schema.sh

#!/bin/sh
# Prompt for user input for Schema name, Oracle SID, and Oracle Home
read -p "Enter schema name: " schema_name
read -p "Enter ORACLE_SID: " ORACLE_SID
read -p "Enter ORACLE_HOME path: " ORACLE_HOME

# Validate the inputs
if [ -z "$schema_name" ] || [ -z "$ORACLE_SID" ] || [ -z "$ORACLE_HOME" ]; then
  echo "All inputs (schema name, ORACLE_SID, ORACLE_HOME) are required. Exiting..." >> nohup.out
  exit 1
fi

# Set environment variables
export ORACLE_SID
export ORACLE_HOME
export PATH=$PATH:$ORACLE_HOME/bin:$ORACLE_HOME/lib:/usr/ccs/bin:$ORACLE_HOME/OPatch
DATEFORMAT=`date +%Y%m%d`

# Data Pump export command
echo "Starting backup for schema $schema_name at $(date)" >> nohup.out
expdp \"/ as sysdba\" directory=OBACKUP dumpfile=expdp_${schema_name}_`echo $DATEFORMAT`.dmp logfile=expdp_${schema_name}_`echo $DATEFORMAT`.log schemas=$schema_name parallel=8 cluster=n flashback_time=systimestamp >> nohup.out 2>&1

# Check if the expdp command was successful
if [ $? -eq 0 ]; then
  echo "Backup of schema $schema_name completed successfully at $(date)." >> nohup.out
else
  echo "Backup failed for schema $schema_name. Please check the log file for details." >> nohup.out
  exit 1
fi


How to run the script ?

$ ./backup_schema.sh
Enter schema name: HR
Enter ORACLE_SID: orcl
Enter ORACLE_HOME path: /u01/app/oracle/product/12.2.0.1/dbhome_1




Take a backup of a schema using the Data Pump utility; just pass the schema name as an argument and the script will take the backup.


vi backup_schema.sh

 #!/bin/sh

# Set environment variables
export ORACLE_SID=testdb1
export ORACLE_HOME=/u01/app/oracle/product/12.2.0.1/test-db
export PATH=$PATH:$ORACLE_HOME/bin:$ORACLE_HOME/lib:/usr/ccs/bin:$ORACLE_HOME/OPatch
DATEFORMAT=`date +%Y%m%d`

# Check if schema name is passed as an argument
if [ -z "$1" ]; then
  echo "Schema name is required. Exiting..." >> nohup.out
  exit 1
fi

# Assign the first argument as the schema name
schema_name=$1

# Data Pump export command
echo "Starting backup for schema $schema_name at $(date)" >> nohup.out
expdp \"/ as sysdba\" directory=OBACKUP dumpfile=expdp_${schema_name}_`echo $DATEFORMAT`.dmp logfile=expdp_${schema_name}_`echo $DATEFORMAT`.log schemas=$schema_name parallel=8 cluster=n flashback_time=systimestamp >> nohup.out 2>&1

# Check if the expdp command was successful
if [ $? -eq 0 ]; then
  echo "Backup of schema $schema_name completed successfully at $(date)." >> nohup.out
else
  echo "Backup failed for schema $schema_name. Please check the log file for details." >> nohup.out
  exit 1
fi


Use below command to run the script ...

nohup ./backup_schema.sh schemaname &

Query to display Non Default Oracle Database users

 SELECT username, account_status, profile FROM dba_users WHERE username NOT IN ( 'SYS', 'SYSTEM', 'OUTLN', 'XDB', 'DBSNMP', 'APPQOSSYS', 'APEX_040000', 'ORDDATA', 'WMSYS', 'DGSYS', 'CTXSYS', 'MDSYS', 'OLAPSYS', 'SYSMAN', 'RMSYS', 'SPATIAL_CSW_USER', 'SPATIAL_WMS_USER', 'SPATIAL_WFS_USER' ) ORDER BY username;
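
On 12c and later, the same list can be produced more simply with the ORACLE_MAINTAINED flag:

SELECT username, account_status, profile
FROM   dba_users
WHERE  oracle_maintained = 'N'
ORDER  BY username;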

Saturday, 18 January 2025

Compile invalid objects for a particular schema

 select 'alter ' || decode(object_type, 'PACKAGE BODY', 'PACKAGE', 'UNDEFINED', 'MATERIALIZED VIEW', object_type)
        || ' ' || owner || '.' || object_name
        || decode(object_type, 'PACKAGE BODY', ' compile body;', ' compile;')
 from dba_objects
 where status = 'INVALID'
   and owner = '&TB';
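
As an alternative to running the generated statements one by one, UTL_RECOMP can recompile everything invalid in a schema in a single call (the schema name below is a placeholder):

-- schema name is a placeholder; use the same value you would give to &TB
EXEC UTL_RECOMP.RECOMP_SERIAL('SCOTT');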

Last 10 executions of a SQL ID

set lines 132
col snap_end head "Snapshot End" format a20
col sql_id head "SQL ID"
col executions head "Execs" format 999,999
col elapsed_min head "Elapsed (min)" format 999,990.99

-- User input for SQL ID
accept sql_id prompt 'Enter SQL ID: '

-- Uses the AWR history views (requires the Diagnostics Pack license);
-- shows the per-snapshot statistics for the SQL ID, newest first.
select * from (
  select to_char(sn.end_interval_time, 'mm/dd/yy hh24:mi') as snap_end,
         st.sql_id,
         st.executions_delta                             as executions,
         round(st.elapsed_time_delta / 1000000 / 60, 2)  as elapsed_min
  from   dba_hist_sqlstat st,
         dba_hist_snapshot sn
  where  sn.snap_id         = st.snap_id
  and    sn.dbid            = st.dbid
  and    sn.instance_number = st.instance_number
  and    st.sql_id          = '&sql_id'
  order  by sn.end_interval_time desc
)
where rownum <= 10;  -- Display only the last 10 snapshots for this SQL ID

Script to check locks in the database

 set serveroutput on size 10000

declare

   cursor c1 is

   select * from v$lock where request != 0

   order by id1, id2;

   wid1            number := -999999;

   wid2            number := -999999;

   wholder_detail  varchar2(200);

   v_err_msg       varchar2(80);

   wsid            number(5);

   wstep           number(2);

   wtype           varchar2(10);

   wobject_name    varchar2(180);

   wobject_name1   varchar2(80);

   wlock_type      varchar2(50);

   w_lastcallet    varchar2(11);

   h_lastcallet    varchar2(11);


   -- Dynamically fetch the machine name without hardcoding any prefix

   machine_name    varchar2(100);


   -- Added by Harish, 24-Feb-2010

   cursor c2 (v_id1 v$lock.ID1%TYPE, v_id2 v$lock.ID2%TYPE) is 

   select sid , type from v$lock

   where id1 = v_id1

     and id2 = v_id2

     and request = 0

     and lmode != 4;


begin

   -- Get the machine name dynamically from the current session

   select machine into machine_name from v$session where rownum = 1;


   -- Now the 'machine_name' will contain the dynamic machine name from v$session


   for c1_rec in c1 loop

      if c1_rec.id1 = wid1 and c1_rec.id2 = wid2 then

         null;

      else

         wstep := 10;


         -- Begin Added by ab

         for c2_rec in c2(c1_rec.id1, c1_rec.id2) loop

            wsid := c2_rec.sid;

            wtype := c2_rec.type;


            -- Output holder session details

            dbms_output.put_line('  ');

            wstep := 20;

            select 'Holder DBU: ' || s.username || ' OSU: ' || s.osuser || ' DBP:' || p.spid || ' APP: ' || s.process ||

                   ' SID:' || s.sid || ' Status: ' || s.status ||

                   ' (' || floor(last_call_et/3600) || ':' ||

                   floor(mod(last_call_et, 3600)/60) || ':' ||

                   mod(mod(last_call_et, 3600), 60) ||

                   ') Module:' || module ||

                   ' AppSrvr: ' || substr(s.machine, 1, 15)  -- No prefix replacement, just display the machine

            into wholder_detail

            from v$session s, v$process p

            where s.sid = wsid

              and s.paddr = p.addr;

            dbms_output.put_line(wholder_detail);

            

            -- Lock type and object info

            begin

               select decode(wtype,'TX', 'Transaction',

                              'DL', 'DDL Lock',

                              'MR', 'Media Recovery',

                              'RT', 'Redo Thread',

                              'UN', 'User Name',

                              'TX', 'Transaction',

                              'TM', 'DML',

                              'UL', 'PL/SQL User Lock',

                              'DX', 'Distributed Xaction',

                              'CF', 'Control File',

                              'IS', 'Instance State',

                              'FS', 'File Set',

                              'IR', 'Instance Recovery',

                              'ST', 'Disk Space Transaction',

                              'TS', 'Temp Segment',

                              'IV', 'Library Cache Invalidation',

                              'LS', 'Log Start or Switch',

                              'RW', 'Row Wait',

                              'SQ', 'Sequence Number',

                              'TE', 'Extend Table',

                              'TT', 'Temp Table',

                              'Un-Known Type of Lock')

               into wlock_type

               from dual;

               

               declare

                  cursor c3 is

                  select object_id from v$locked_object

                  where session_id = wsid;

               begin

                  wobject_name := '';

                  for c3_rec in c3 loop

                     select object_type || ': ' || owner || '.' || object_name

                     into wobject_name

                     from dba_objects

                     where object_id = c3_rec.object_id;

                     wobject_name := wobject_name || ' ' || wobject_name1;

                  end loop;

               exception

                  when others then

                     wobject_name := wobject_name || ' No Object Found';

               end;

               dbms_output.put_line('Lock Held: ' || wlock_type || ' for Object :' || wobject_name);

            exception

               when no_data_found then

                  dbms_output.put_line('Lock Held: ' || wlock_type || ' No object found in DBA Objects');

            end;

         end loop;

      end if;

      wstep := 30;


      -- Requestor session details

      select '....   Requestor DBU: ' || s.username || ' OSU: ' || s.osuser || ' DBP:' || p.spid || ' APP: ' || s.process ||

             ' SID:' || s.sid || ' Status: ' || s.status ||

             ' (' || floor(last_call_et/3600) || ':' ||

             floor(mod(last_call_et, 3600)/60) || ':' ||

             mod(mod(last_call_et, 3600), 60) ||

             ') Module:' || module ||

             ' AppSrvr: ' || substr(s.machine, 1, 15)  -- No prefix replacement, just display the machine

      into wholder_detail

      from v$session s, v$process p

      where s.sid = c1_rec.sid

        and s.paddr = p.addr;

      dbms_output.put_line(wholder_detail);


      wid1 := c1_rec.id1;

      wid2 := c1_rec.id2;

   end loop;


   -- No locks held

   if wid1 = -999999 then

      wstep := 40;

      dbms_output.put_line('No one requesting locks held by others');

   end if;


exception

   when others then

      v_err_msg := (sqlerrm || '  ' || sqlcode || ' step=' || to_char(wstep));

      DBMS_OUTPUT.PUT_LINE(v_err_msg);

end;

/


How to disable Auto Task Tuning jobs...

 set lines 180 pages 1000
col client_name for a40
col attributes for a60
select client_name, status,attributes,service_name from dba_autotask_client;
CLIENT_NAME                              STATUS   ATTRIBUTES                                                   SERVICE_NAME
---------------------------------------- -------- ------------------------------------------------------------ ----------------------------------------------------------------
auto optimizer stats collection          DISABLED ON BY DEFAULT, VOLATILE, SAFE TO KILL
auto space advisor                       ENABLED  ON BY DEFAULT, VOLATILE, SAFE TO KILL
sql tuning advisor                       ENABLED  ONCE PER WINDOW, ON BY DEFAULT, VOLATILE, SAFE TO KILL


BEGIN
DBMS_AUTO_TASK_ADMIN.disable(
client_name => 'auto space advisor',
operation   => NULL,
window_name => NULL);
END;
/

PL/SQL procedure successfully completed.

BEGIN
  DBMS_AUTO_TASK_ADMIN.disable(
    client_name => 'sql tuning advisor',
    operation   => NULL,
    window_name => NULL);
END;  
/

PL/SQL procedure successfully completed.

BEGIN
DBMS_AUTO_TASK_ADMIN.disable(
client_name => 'auto optimizer stats collection',
operation   => NULL,
window_name => NULL);
END;
/

PL/SQL procedure successfully completed.

SQL> select client_name, status,attributes,service_name from dba_autotask_client;
CLIENT_NAME                              STATUS   ATTRIBUTES                                                   SERVICE_NAME
---------------------------------------- -------- ------------------------------------------------------------ ----------------------------------------------------------------
auto optimizer stats collection          DISABLED ON BY DEFAULT, VOLATILE, SAFE TO KILL
auto space advisor                       DISABLED ON BY DEFAULT, VOLATILE, SAFE TO KILL
sql tuning advisor                       DISABLED ONCE PER WINDOW, ON BY DEFAULT, VOLATILE, SAFE TO KILL

SQL>
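
To re-enable any of these tasks later, the matching DBMS_AUTO_TASK_ADMIN.ENABLE call takes the same arguments, for example:

BEGIN
  DBMS_AUTO_TASK_ADMIN.enable(
    client_name => 'auto space advisor',
    operation   => NULL,
    window_name => NULL);
END;
/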

Script to take backup of multiple schema using shell script

 $crontab -l

00 04 * * * /ora/app/oracle/refresh/expdp_testdb_SIT.sh 1>/dev/null 2>&1


Script

cat /ora/app/oracle/refresh/expdp_testdb_SIT.sh
#!/bin/sh
export ORACLE_SID=testdb1
export ORACLE_HOME=/u01/app/oracle/product/12.2.0.1/test-db
export PATH=$PATH:$ORACLE_HOME/bin:$ORACLE_HOME/lib:/usr/ccs/bin:$ORACLE_HOME/OPatch
DATEFORMAT=`date +%Y%m%d`

# Data Pump export
expdp \"/ as sysdba\" directory=OBACKUP dumpfile=expdp_SIT_`echo $DATEFORMAT`.dmp logfile=expdp_SIT_`echo $DATEFORMAT`.log schemas=ACS,AU,EPG,FP,FRAUD,MAIN,SMSGATE,SSO,SVAB,SVCG,SVCGUP,SVCM,SVIP,SVISTA,SVMP,SVMP_IG,SVRB,SVWEB,SVWFE,WEBGATE parallel=8 cluster=n flashback_time=systimestamp

Monitoring Data pump Jobs

select sid, serial#, sofar, totalwork, dp.owner_name, dp.state, dp.job_mode
from gv$session_longops sl, gv$datapump_job dp
where sl.opname = dp.job_name and sofar != totalwork;
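
For a quick overview of all Data Pump jobs and their states (including stopped or NOT RUNNING jobs), dba_datapump_jobs can also be queried:

SELECT owner_name, job_name, operation, job_mode, state, attached_sessions
FROM   dba_datapump_jobs;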

Backup Database Link Before Cloning & create them back after clone.

 set pages 0
set linesize 200
set feedback off
spool db_links_backup.sql

select 'create public database link ' || db_link || ' connect to ' || username || ' identified by <password> using ''' || host || ''';'
from dba_db_links
where owner = 'PUBLIC';

spool off;
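
After the clone, the spooled db_links_backup.sql can be run (with the real passwords filled in) and the result verified with:

SELECT owner, db_link, username, host FROM dba_db_links ORDER BY owner, db_link;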

Script to remove archives from Solaris Operating system

 #Author : Abdul Wasiq

CDATE=`date +%d-%m-%Y`
{

echo " Count of archivelog file olderthan 3 days    "
echo " ---------------------------------------------------------- "
/usr/bin/find /u02/oradata/db/archivelog/ -mtime +3 |wc -l
echo "                                                             "
echo " ----------------------------------------------------------- "
echo " List of 3 days old archivelog files "
echo " ----------------------------------------------------------- "
/usr/bin/find /u02/oradata/db/archivelog/ -mtime +3 -exec ls -lrth '{}' \;
echo " ----------------------------------------------------------- "
echo " Removing 3 days old archivelog files  "
echo " ----------------------------------------------------------- "
/usr/bin/find /u02/oradata/db/archivelog/ -mtime +3 -exec rm -f '{}' \;
} > /backup/scripts/logs/arch_remove_$CDATE.log


echo "==================== Completed =================="

Synonyms without a prefix (accessing another schema's objects without a prefix)

 set pages 0 lines 200 trims on verify off feedback off
accept grants_to  prompt 'Enter user to grant privileges: '
accept schema     prompt 'Enter schema on which to grant: '
spool tmpgrants.sql
set pages 0

-- Query for tables (views) in the specified schema
select 'grant select on ' || owner || '.' || table_name || ' to &grants_to;', chr(10),
       'create synonym &grants_to..' || table_name || ' for ' || owner || '.' || table_name || ';', chr(10)
from all_tables  -- Change from user_views to all_tables
where owner = upper('&schema')

union all

select 'grant select on ' || owner || '.' || view_name || ' to &grants_to;', chr(10),
       'create synonym &grants_to..' || view_name || ' for ' || owner || '.' || view_name || ';', chr(10)
from all_views  -- Change from dba_views to all_views
where owner = upper('&schema');

spool off;

Database Features Usage

 set pages 999
set linesize 2000
col c1 heading 'feature'    format a45
col c2 heading 'times|used' format 999,999
col c3 heading 'first|used'
col c4 heading 'used|now'

select name c1,detected_usages  c2,first_usage_date c3,currently_used  c4 from dba_feature_usage_statistics where first_usage_date is not null;

Script to remove archives

#!/bin/bash
#
# Script to Remove archive log files
#
########################################################################## 
CDATE=`date +%d-%m-%Y`
{
echo "##########################################################################"
echo " Removing olderthan 5 days archive log files.... "
/usr/bin/find /proddb/arch/* -mtime +5 -exec ls -lrth '{}' \;
/usr/bin/find /proddb/arch/* -mtime +5 -exec rm -f '{}' \;
echo "=================== END ========================"
} > /backup/logs/arch_remove_$CDATE.log         

Script to move files to another directory

This script is designed for and tested on an IBM AIX machine.


#!/bin/ksh
export INSTANCE=$(hostname -s)
export MAILFROM=$INSTANCE'@idhar.udhar.com'
SUBJECT="`hostname` Weblogic old files found in u01 and moved to another directory wllogs"
TO=dbs@idhar.udhar.com

echo "File system usage before moving the files to another directory \n \n" >> /home/oracle/scripts/Filesystem_usage.txt

df -gt /u01 /wllogs >> /home/oracle/scripts/Filesystem_usage.txt

export srcdir=/u01/Oracle/Middleware/Oracle_Home/user_projects/domains/base_domain
setTag=`date '+%d%b%y_%H%M%S_weblogic_oldlogs'`
destdir="/wllogs/CRON_WEBLOGIC_LOGS/$setTag"
export destdir
mkdir $destdir

find $srcdir -name "heapdump.*" -mtime +5 -exec mv {} $destdir \;

sleep 10
echo "File system usage after moving the files to wllogs \n \n" >>/home/oracle/scripts/Filesystem_usage.txt

df -gt /u01 /wllogs >> /home/oracle/scripts/Filesystem_usage.txt
echo "===================="   >>/home/oracle/scripts/Filesystem_usage.txt
cd $destdir

echo "Below files are moved to wllogs from u01 \n \n" >> /home/oracle/scripts/Filesystem_usage.txt
ls -ltr >> /home/oracle/scripts/Filesystem_usage.txt
mailx -s "$SUBJECT" -r "$MAILFROM" $TO < /home/oracle/scripts/Filesystem_usage.txt

Gather Stats for schemas




set serveroutput on;
set echo on;
set timing on;
set time on;
EXECUTE dbms_stats.set_global_prefs('DEGREE', DBMS_STATS.AUTO_DEGREE);
EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'MERCH_BANK_DHL', estimate_percent => dbms_stats.auto_sample_size, CASCADE=> true, GRANULARITY=> 'ALL');
EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'MERCH_PROC_DHL', estimate_percent => dbms_stats.auto_sample_size, CASCADE=> true, GRANULARITY=> 'ALL');
EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'MERCH_BANK_CBD', estimate_percent => dbms_stats.auto_sample_size, CASCADE=> true, GRANULARITY=> 'ALL');
EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'MERCH_AA_PROC', estimate_percent => dbms_stats.auto_sample_size, CASCADE=> true, GRANULARITY=> 'ALL');
EXECUTE DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'MAPS_PG_CBD', estimate_percent => dbms_stats.auto_sample_size, CASCADE=> true, GRANULARITY=> 'ALL');
EXEC DBMS_STATS.gather_dictionary_stats;
--EXEC DBMS_STATS.gather_fixed_objects_stats;
ALTER SYSTEM FLUSH SHARED_POOL;
ALTER SYSTEM FLUSH BUFFER_CACHE;
exit;
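
To confirm the gather ran, the LAST_ANALYZED dates for the same schemas can be checked afterwards (schema list taken from the script above):

SELECT owner, MIN(last_analyzed) AS oldest, MAX(last_analyzed) AS newest
FROM   dba_tables
WHERE  owner IN ('MERCH_BANK_DHL','MERCH_PROC_DHL','MERCH_BANK_CBD','MERCH_AA_PROC','MAPS_PG_CBD')
GROUP  BY owner;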


Call the above SQL script from a shell script (shown below for reference) and schedule it via crontab.


#!/bin/bash
export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_2
fdate=`date '+%d-%b-%Y-%H_%M'`
/u01/app/oracle/product/12.1.0/dbhome_2/bin/sqlplus dba/dbadba098#@MERUAT2 @/orabackup/scripts/mer_pos_eee_schema_stats.sql >/orabackup/scripts/logs/mer_pos_eee_schema_stats-${fdate}-execution.log
exit;

Health Check SQL Queries

set lines 999 pages 999;
set echo off;
prompt #########################Tablespace Monitoring:#########################
select a.tablespace_name, round(assigned_space) "ALLOCATED (MB)", round((nvl(free_space,0) + INCRM)) "FREE (MB)", round((assigned_space - nvl(free_space,0) - INCRM)) "USED (MB)",
round( ((assigned_space - nvl(free_space,0) - INCRM) / assigned_space)*100) "% Full"
from(select  tablespace_name, count(*) num1, sum(DECODE(SIGN(maxbytes - bytes), -1, bytes, maxbytes))/(1024*1024) assigned_space, sum(DECODE(SIGN(maxbytes - bytes), -1, bytes, maxbytes))/(1024*1024) - sum(bytes)/(1024*1024) INCRM from dba_data_files group by tablespace_name) A,
(select  tablespace_name, count(*) num2, (sum(bytes)/(1024*1024)) free_space
from dba_free_space a group by tablespace_name) B where b.tablespace_name(+)=a.tablespace_name
union all 
select a.tablespace_name, round(assigned_space) "ALLOCATED (MB)", round((nvl(free_space,0) + INCRM)) "FREE (MB)", round((assigned_space - nvl(free_space,0) - INCRM)) "USED (MB)",
round(((assigned_space - nvl(free_space,0) - INCRM ) / assigned_space)*100) "% Full"
from(select  tablespace_name, count(*) num1, sum(DECODE(SIGN(maxbytes - bytes), -1, bytes, maxbytes))/(1024*1024) assigned_space,
Sum(Decode(Sign(Maxbytes - Bytes), -1, Bytes, Maxbytes))/(1024*1024) - Sum(Bytes)/(1024*1024) Incrm From Dba_Temp_Files Group By Tablespace_Name) A,
(Select  Tablespace_Name, Count(*) Num2, (Sum(free_space)/(1024*1024)) Free_Space
FROM DBA_TEMP_FREE_SPACE GROUP BY TABLESPACE_NAME) B WHERE B.TABLESPACE_NAME(+)=A.TABLESPACE_NAME 
Order By "% Full" Desc;

prompt #########################Process Utilization:#########################
set echo on;
SELECT INST_ID,upper(resource_name) as resource_name,current_utilization,max_utilization,initial_allocation FROM gv$resource_limit WHERE resource_name in ('processes', 'sessions');

set echo off;
prompt #########################Check archive log usage FRA:#########################
col name for a25;
set echo on;
select name,(SPACE_LIMIT/1024/1024/1024), (SPACE_USED/1024/1024/1024), (SPACE_RECLAIMABLE/1024/1024/1024) from v$recovery_file_dest;

set echo off;
prompt #########################Instance Startup :#########################
set echo on;
SELECT inst_id,INSTANCE_NAME,to_char(startup_time,'DD-MON-YYYY HH24:MI:SS') "DB Startup Time" FROM sys.gv_$instance;

set echo off;
prompt #########################ASM Disk usage:#########################
set echo on;
SELECT NAME,TOTAL_MB,FREE_MB,ROUND((1- (FREE_MB / TOTAL_MB))*100, 2)  "percentage used" FROM v$asm_diskgroup order by 4 desc;

set echo off;
prompt #########################Instance_name:#########################
set echo on;
select instance_name, host_name from gv$instance order by 1;

set echo off;
prompt #########################Open_mode:##########################
set echo on;
select name, open_mode from gv$database  order by 1;

set echo off;
prompt #########################Information About Each Container in a CDB:##########################
COLUMN NAME FORMAT A8
set echo on;
SELECT NAME, CON_ID, DBID, CON_UID, GUID FROM V$CONTAINERS ORDER BY CON_ID;

set echo off;
prompt #########################Name and Open Mode of Each PDB:#########################
COLUMN NAME FORMAT A15
COLUMN RESTRICTED FORMAT A10
COLUMN OPEN_TIME FORMAT A40
set echo on;
SELECT NAME, OPEN_MODE, RESTRICTED, OPEN_TIME FROM V$PDBS;

set echo off;
prompt #########################Locked Object:#########################
set echo on;
select object_name,os_user_name,session_id from all_objects A ,v$locked_object L where A.object_id=L.object_id group by object_name,os_user_name,session_id order by 1,3;

set echo off;
prompt #########################Block corruption:#########################
set echo on;
select count(*) from v$database_block_corruption;

set echo off;
prompt #########################Table_Stats:#########################
set echo on;
select distinct OWNER,TRUNC(LAST_ANALYZED) from DBA_TABLES where owner like '%AA%' group by owner,last_analyzed order by 1;

exit;


Copy all of the above queries into a SQL file and call it from a shell script for each database, as below...

#!/bin/bash
export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_2
fdate=`date '+%d-%b-%Y-%H_%M'`
/u01/app/oracle/product/12.1.0/dbhome_2/bin/sqlplus dba/dba2025#@MERSIT2 @/orabackup/scripts/DTS_UAT_DB_MERCH_monitor.sql >/orabackup/scripts/logs/DTS_UAT_MERSIT_monitor-${fdate}-execution.log
/u01/app/oracle/product/12.1.0/dbhome_2/bin/sqlplus dba/dba2025#@MERUAT2 @/orabackup/scripts/DTS_UAT_DB_MERCH_monitor.sql >/orabackup/scripts/logs/DTS_UAT_MERUAT_monitor-${fdate}-execution.log
/u01/app/oracle/product/12.1.0/dbhome_2/bin/sqlplus dba/dba2025#@POSSIT2 @/orabackup/scripts/DTS_UAT_DB_POS_monitor.sql >/orabackup/scripts/logs/DTS_UAT_POSSIT_monitor-${fdate}-execution.log
/u01/app/oracle/product/12.1.0/dbhome_2/bin/sqlplus dba/dba2025#@POSUAT2 @/orabackup/scripts/DTS_UAT_DB_POS_monitor.sql >/orabackup/scripts/logs/DTS_UAT_POSUAT_monitor-${fdate}-execution.log
exit;



Sunday, 12 January 2025

Script to find temp tablespace free and used space

tempfree.sql

SELECT   A.tablespace_name tablespace, D.mb_total,
         SUM (A.used_blocks * D.block_size) / 1024 / 1024 mb_used,
         D.mb_total - SUM (A.used_blocks * D.block_size) / 1024 / 1024 mb_free
FROM     v$sort_segment A,
         (
         SELECT   B.name, C.block_size, SUM (C.bytes) / 1024 / 1024 mb_total
         FROM     v$tablespace B, v$tempfile C
         WHERE    B.ts#= C.ts#
         GROUP BY B.name, C.block_size
         ) D
WHERE    A.tablespace_name = D.name
GROUP by A.tablespace_name, D.mb_total;


tempspace.sql

select v.sid, v.serial#, v.status, v.username, s.tablespace, s.segtype
from v$session v, v$sort_usage s
where v.saddr=s.session_addr
/


Script Name : temp_top_sql.sql
description : Run the below script to monitor temp tablespace usage and see the SQL text currently running.



set linesize 400
column osuser format a20
column sid format 9999
column serial# format 99999
column username format a20
column program format a20
column sort_size_mb format 999,990.99
        select s.osuser, s.sid, s.serial#,s.username, s.program,
               sum(u.blocks)*vp.value/1024/1024 sort_size_MB
        from   sys.v_$session s, sys.v_$sort_usage u, sys.v_$parameter vp
        where  s.saddr = u.session_addr
          and  vp.name = 'db_block_size'
            group  by s.osuser, s.sid, s.serial#,s.username, s.program, vp.value;
SELECT s.sid, s.status, sq.address,  sq.sql_id, sq.sql_text
FROM v$session s, v$sql sq
WHERE s.status = 'ACTIVE'
and s.sql_hash_value = sq.hash_value
and s.sql_address = sq.address
and s.sid IN (
select s.sid
     from   sys.v_$session s, sys.v_$sort_usage u, sys.v_$parameter vp
where  s.saddr = u.session_addr
            and  vp.name = 'db_block_size'
         group  by s.osuser, s.sid, s.serial#,s.username, s.program, vp.value);

Script to check all tablespace details in the database

set linesize 300
set pagesize 300

select a.tablespace_name, b.total,
       c.available, d.large_extent,
       (b.total - c.available) used,
       round((((b.total - c.available)*100)/b.total),2) pctused
  from (select tablespace_name, (sum(bytes)/1024)/1024 total
          from dba_data_files group by tablespace_name)  b,
       (select tablespace_name,  nvl(round(((sum(bytes)/1024)/1024),2),0) available
          from dba_free_space group by tablespace_name)  c,
       (select tablespace_name, (max(bytes)/1024)/1024 large_extent
          from dba_free_space group by tablespace_name)  d,
       dba_tablespaces  a
 where a.tablespace_name = b.tablespace_name(+)
   and a.tablespace_name = c.tablespace_name(+)
   and a.tablespace_name = d.tablespace_name(+)
order by pctused
/

Script to find the top 10 active sessions in the database.

 /* top10active.sql
    shows the top 10 longest-active user sessions
*/
col osuser format a10 trunc
col LastCallET     format 99,999
col username format a10 trunc
col uprogram format a25 trunc
col machine format a10 trunc
set linesize 132 pages 20
set verify off
accept trgtuser char default ALL prompt 'Limit to what userid <ALL> : '
select * from (
select to_char(s.logon_time, 'mm/dd hh:mi:ssAM') loggedon,
  s.sid, s.status,
  floor(last_call_et/60) "LastCallET",
 s.username, s.osuser, 
 p.spid, s.module || ' - ' || s.program uprogram, 
s.machine, s.sql_hash_value
from v$session s, v$process p
where p.addr = s.paddr
  and s.type = 'USER'
  and s.username is not null
  and s.status = 'ACTIVE'
  and (s.username = upper('&trgtuser') or upper('&trgtuser') = 'ALL')
order by 4 desc)
where rownum < 11;

Script to check tablespace details.

 

column file_name format a55
set linesize 132
set pages 200
column tablespace_name format a11
set verify off
accept tbs prompt 'What is the Tablespace Name: '


SELECT  dts.tablespace_name, 
NVL(ddf.bytes / 1024 / 1024, 0) avail,
NVL(ddf.bytes - NVL(dfs.bytes, 0), 0)/1024/1024 used,
NVL(dfs.bytes / 1024 / 1024, 0) free,
TO_CHAR(NVL((ddf.bytes - NVL(dfs.bytes, 0)) / ddf.bytes * 100, 0), '990.00')
"Used %" ,
TO_CHAR(NVL((ddf.bytes - NVL(ddf.bytes - NVL(dfs.bytes, 0), 0)) / ddf.bytes
* 100, 0), '990.00') free_pct,
decode(sign(
(NVL(ddf.bytes - NVL(dfs.bytes, 0), 0)/1024/1024)/0.85 - NVL(ddf.bytes / 1024 /
1024, 0)),-1,0,(NVL(ddf.bytes - NVL(dfs.bytes, 0), 0)/1024/1024)/0.85 - NVL(ddf.bytes / 1024 / 1024, 0))  "Required MB"
FROM 
sys.dba_tablespaces dts, 
(select tablespace_name, sum(bytes) bytes 
from dba_data_files group by tablespace_name) ddf, 
(select tablespace_name, sum(bytes) bytes 
from dba_free_space group by tablespace_name) dfs 
WHERE 
dts.tablespace_name = ddf.tablespace_name(+) 
AND dts.tablespace_name = dfs.tablespace_name(+) 
and dts.tablespace_name = '&tbs';

select file_name,bytes/1024/1024 from dba_data_files
where tablespace_name = '&tbs';

To find information about locks in the database

To check locks in the database:

col schema format a8
col object format a20
select substr(a.os_user_name,1,9)    "OS User" 
,      substr(a.oracle_username,1,8) "DB User" 
,      substr(b.owner,1,8)           "Schema" 
,      e.sid      "Sid"
,      substr(b.object_name,1,20)    "Object" 
,      substr(b.object_type,1,10)    "Type" 
,      substr(c.segment_name,1,5)    "RBS" 
,      substr(d.used_urec,1,12)      "# of Records" 
from v$locked_object a 
,    dba_objects b 
,    dba_rollback_segs c 
,    v$transaction d 
,    v$session e 
where a.object_id =  b.object_id 
  and a.xidusn    =  c.segment_id 
  and a.xidusn    =  d.xidusn 
  and a.xidslot   =  d.xidslot 
  and d.addr      =  e.taddr;

Locksid
====
select substr(a.os_user_name,1,9)    "OS User"
,      substr(a.oracle_username,1,8) "DB User"
,      substr(b.owner,1,8)           "Schema"
,      substr(b.object_name,1,20)    "Object"
,      substr(b.object_type,1,10)    "Type"
,      substr(c.segment_name,1,5)    "RBS"
,      substr(d.used_urec,1,12)      "# of Records"
, e.sid
from v$locked_object a
,    dba_objects b
,    dba_rollback_segs c
,    v$transaction d
,    v$session e
where a.object_id =  b.object_id
  and a.xidusn    =  c.segment_id
  and a.xidusn    =  d.xidusn
  and a.xidslot   =  d.xidslot
  and d.addr      =  e.taddr
/


Comment: useful to check the SQL for a given SID (e.g. a locking session); it uses the session's previous SQL hash value.
select sql_text from v$sqltext where hash_value=( select prev_hash_value from v$session where sid='&sid');

Check the size of the different memory pools in the database using a SQL query and the database parameter file.

set linesize 500
set pages 200
col name for a35
col value for a35
select name, value from v$parameter2 where name in ('sga_target','java_pool_size','large_pool_size','shadow_core_dump','db_cache_size','sga_max_size','sga_target','shared_pool_reserved_size','shared_pool_size');


Checking at OS Level.

egrep -i 'shared_pool_size|shared_pool_reserved_size|db_cache_size|sga_max_size|sga_target' initTEST.ora

*.db_cache_size=2G
*.sga_max_size=0
*.sga_target=0
*.shared_pool_reserved_size=320M
*.shared_pool_size=3500M
*pga_aggregate
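
The sizes the instance is actually using can also be read from the dynamic SGA views, which is often more reliable than the parameter file when automatic memory management is in play:

SELECT component, ROUND(current_size/1024/1024) AS current_mb
FROM   v$sga_dynamic_components
WHERE  current_size > 0;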

Solaris tar and untar commands

 ++++++++++++++++++++++++

solaris tar commands

++++++++++++++++++++++

nohup tar -cvf - appl |gzip -c > /backup/rman_1060/bhaogpp1/bin_25feb13/appl.tar.gz &

nohup tar -cvf - 10.1.2 |gzip -c > /backup/rman_1060/bhaogpp1/bin_25feb13/1012.tar.gz &

 nohup tar -cvf - comn |gzip -c > /backup/rman_1060/bhaogpp1/bin_25feb13/comn.tar.gz &

 nohup tar -cvf - 10.1.3 |gzip -c > /backup/rman_1060/bhaogpp1/bin_25feb13/1013.tar.gz &


+++++++++++++++++++++++++++

solaris untar commands

+++++++++++++++++++++++++

 nohup gunzip -c /backup/rman_0008/bhaerp99/BIN_18oct12/comn.tar.gz | tar -xvf - &

 nohup gunzip -c /backup/rman_0008/bhaerp99/BIN_18oct12/1012.tar.gz | tar -xvf - &

 nohup gunzip -c /backup/rman_0008/bhaerp99/BIN_18oct12/appl.tar.gz | tar -xvf - &

 nohup gunzip -c /backup/rman_0008/bhaerp99/BIN_18oct12/1013.tar.gz | tar -xvf - &

 nohup  gunzip -c /backup/rman_0008/bhaerp99/BIN_18oct12/1120.tar.gz | tar -xvf - &


Change database global name

  Global database name

=========================

alter database rename global_name to abcgapn1.bh.com;
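
The change can be confirmed with:

select * from global_name;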

Linux tar and untar command

------------------

Linux Tar Command:

-------------------

cd /u01/oracle/

nohup tar -czvf /bhapgag1/staging/bin_gpap1_01Sep2013/PGAP1_1120.tar.gz 11.2.0 > /bhapgag1/staging/bin_gpap1_01Oct2013/1120.log &

nohup tar -czvf /bhapgag1/staging/bin_gpap1_01Oct2013/PGAP1_appl.tar.gz appl > /bhapgag1/staging/bin_gpap1_01Oct2013/appl.log &

nohup tar -czvf /bhapgag1/staging/bin_gpap1_01Oct2013/GAP1_comn.tar.gz comn > /bhapgag1/staging/bin_gpap1_01Oct2013/comn.log &

nohup tar -czvf /bhapgag1/staging/bin_gpap1_01Oct2013/PGAP1_1012.tar.gz 10.1.2 > /bhapgag1/staging/bin_gpap1_01Oct2013/10.1.2.log &

nohup tar -czvf /bhapgag1/staging/bin_gpap1_01Oct2013/PGAP1_1013.tar.gz 10.1.3 > /bhapgag1/staging/bin_gpap1_01Oct2013/10.1.3.log &


Linux Untar Command:

---------------------


cd /u01

nohup tar -xzvf /bhapgag1/staging/bin_gpap1_01Sep2013/PGAP1_1120.tar.gz > /bhapgag1/staging/bin_gpap1_01Oct2013/1120_untar.log &

nohup tar -xzvf /bhapgag1/staging/bin_gpap1_01Oct2013/PGAP1_appl.tar.gz > /bhapgag1/staging/bin_gpap1_01Oct2013/appl_untar.log &

nohup tar -xzvf /bhapgag1/staging/bin_gpap1_01Oct2013/PGAP1_comn.tar.gz > /bhapgag1/staging/bin_gpap1_01Oct2013/comn_untar.log &

nohup tar -xzvf /bhapgag1/staging/bin_gpap1_01Oct2013/PGAP1_1012.tar.gz > /bhapgag1/staging/bin_gpap1_01Oct2013/10.1.2_untar.log &

nohup tar -xzvf /bhapgag1/staging/bin_gpap1_01Oct2013/PGAP1_1013.tar.gz > /bhapgag1/staging/bin_gpap1_01Oct2013/10.1.3_untar.log & 

setup ssh

 setup ssh

++++++++++

on Node1:

---------

mkdir $HOME/.ssh


ssh-keygen -t rsa


copy content of id_rsa.pub to authorized_keys on other (node2) server



on Node2:

---------

and do the same on node2



mkdir $HOME/.ssh


ssh-keygen -t rsa


copy content of id_rsa.pub to authorized_keys on other (node1) server 


How to to find out toad session

 ==========================

To find out toad session

==========================

select p.spid,s.sid,s.serial#,s.username,s.status,s.last_call_et,p.program,p.terminal,logon_time,module,s.osuser

from V$process p,V$session s where s.paddr = p.addr and s.status = 'ACTIVE' and s.module like '%TOAD%';

How to get encrypted password of a user.

 ====================================

To get encrypted password of a user.

===================================

select 'alter user "'||username||'" identified by values '''||extract(xmltype(dbms_metadata.get_xml('USER',username)),'//USER_T/PASSWORD/text()').getStringVal()||''';'  old_password from  dba_users where username = 'GLOGOWNER';


How to remove archives for a specific date?

 ++++++++++++++++

How to remove archives for a specific date?

++++++++++++++++


$cd /u01/archive

$rm `ls -l|grep "Apr 13" |awk '{print $9}'`