Wednesday, October 23, 2019

Memory usage for AMM RAC database and changing RAC database to ASMM

SQL> select sum(bytes/1024/1024) Current_SGA_SIZE_in_MB from v$sgastat;
CURRENT_SGA_SIZE_IN_MB
----------------------
        1804.448437
SQL> select sum(bytes/1024/1024) MAX_SGA_SIZE_in_MB from  v$sgainfo    where name = 'Maximum SGA Size';
MAX_SGA_SIZE_IN_MB
------------------
    2592.84766
SQL> show parameter memory_max_target;
NAME                     TYPE     VALUE
------------------------------------ ----------- ------------------------------
memory_max_target             big integer 1600M
SQL> select (value/1024/1024) Current_PGA_IN_USE_in_MB from v$pgastat where name = 'total PGA inuse';
CURRENT_PGA_IN_USE_IN_MB
------------------------
          788.078938
SQL> select (value/1024/1024) MAX_PGA_ALLOCATED_in_MB from v$pgastat where name = 'maximum PGA allocated';
MAX_PGA_ALLOCATED_IN_MB
-----------------------
         1567.658203
SQL> select (value/1024/1024) PGA_TARGET_in_MB    from v$pgastat where name = 'aggregate PGA target parameter';
PGA_TARGET_IN_MB
----------------
         480





Memory usage of the existing AMM setup:
 - memory reserved for PGA/SGA: 8 GB
 - current PGA size: 790 MB
 - current SGA size: 1804 MB
 - free memory for future PGA/SGA usage: ~2.5 GB

For switching to ASMM this can be translated into:
  SGA_MAX_SIZE             : 3 GB
  SGA_TARGET               : 3 GB
  PGA_AGGREGATE_TARGET     :  2 GB

For further tuning, check V$PGA_TARGET_ADVICE.
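The translation above amounts to rounding each observed peak up to the next gigabyte and leaving the rest of the reservation as headroom. A minimal shell sketch of that arithmetic (the rounding rule is our reading of the numbers, not an Oracle formula):

```shell
# Sketch: round observed SGA/PGA peaks (in MB) up to the next GB to
# get candidate ASMM targets.
suggest_asmm() {
  sga_peak_mb=$1 pga_peak_mb=$2
  sga_gb=$(( (sga_peak_mb + 1023) / 1024 ))   # ceiling division to whole GB
  pga_gb=$(( (pga_peak_mb + 1023) / 1024 ))
  echo "SGA_TARGET=${sga_gb}G PGA_AGGREGATE_TARGET=${pga_gb}G"
}
```

Fed the peaks measured above (~2593 MB SGA, ~1568 MB PGA), it prints `SGA_TARGET=3G PGA_AGGREGATE_TARGET=2G`, matching the values chosen here.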

Execute the following commands.
Disable AMM
  SQL> alter system reset memory_max_target scope=spfile  sid='*';
  SQL> alter system reset memory_target  scope=spfile  sid='*';

Enable ASMM
  SQL> alter system set SGA_MAX_SIZE=3G scope=spfile  sid='*';
  SQL> alter system set SGA_TARGET=3G scope=spfile  sid='*'; 
  SQL> alter system set PGA_AGGREGATE_TARGET=2G scope=spfile  sid='*';  

Restart the database and verify that we have switched from AMM to ASMM:
SQL> show parameter memory
NAME                     TYPE     VALUE
------------------------------------ ----------- ------------------------------
memory_max_target             big integer 0
memory_target                 big integer 0
--> AMM disabled 

SQL> show parameter sga
NAME                     TYPE          VALUE
------------------------ ----------- ------------------------------
sga_max_size             big integer 3G
sga_target               big integer 3G

SQL> show parameter pga
NAME                     TYPE     VALUE
------------------------ ----------- ------------------------------
pga_aggregate_target     big integer 2G

--> ASMM enabled !

Friday, September 6, 2019

OPatch, Maintaining patches in Oracle Database

OPatch is an Oracle utility that assists us in applying interim patches to Oracle's RDBMS software and clusterware. We can find the opatch utility in the $ORACLE_HOME/OPatch directory.

For example, let us apply patch number 11114547 to our 10.2.0.5 database.



Applying Patch:

1- Backup Oracle Home directory.

       $ tar -cf ora11g.tar ora11g

2- Download patch file p11114547_10205_Linux-x86-64.zip from My Oracle Support (formerly Metalink) and copy it to the database server.

3- Unzip the patch file.

       $ unzip p11114547_10205_Linux-x86-64.zip

4- Apply patch with opatch utility.

       $ cd 11114547

       $ $ORACLE_HOME/OPatch/opatch apply
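The four numbered steps can be condensed into a couple of shell functions; a sketch (paths, file names and the patch ID are the example's own, step 2, the download, stays manual, and opatch must of course exist in the target home):

```shell
# Sketch of the patch workflow above as shell functions.

backup_oracle_home() {
  # 1- tar backup of the whole Oracle Home directory
  home=$1 archive=$2
  tar -cf "$archive" -C "$(dirname "$home")" "$(basename "$home")"
}

apply_patch() {
  # 3-/4- unzip the downloaded patch and run opatch apply from it
  home=$1 zipfile=$2 patch_id=$3
  unzip -o "$zipfile"
  cd "$patch_id" && "$home/OPatch/opatch" apply
}

# e.g.
# backup_oracle_home /oracle/ora11g /backup/ora11g.tar
# apply_patch /oracle/ora11g p11114547_10205_Linux-x86-64.zip 11114547
```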


To see the list of applied patches:

        $ORACLE_HOME/OPatch/opatch lsinventory



For example:

        $ORACLE_HOME/OPatch/opatch lsinventory

   Invoking OPatch 11.2.0.4.9

Oracle Interim Patch Installer version 11.2.0.4.9

Copyright (c) 2011, Oracle Corporation. All rights reserved.

Oracle Home : /oracle/ora10g

Central Inventory : /oracle/oraInventory

from : /etc/oraInst.loc
OPatch version : 11.2.0.4.9

OUI version : 11.2.0.5.0

OUI location : /oracle/ora10g/oui

Log file location : /oracle/ora10g/cfgtoollogs/opatch/opatch2017-09-14_12-13-12PM.log

Patch history file: /oracle/ora10g/cfgtoollogs/opatch/opatch_history.txt
Lsinventory Output file location : /oracle/ora10g/cfgtoollogs/opatch/lsinv/lsinventory2017-09-14_12-13-12PM.log



Installed Top-level Products (3):

Oracle Database 10g 10.2.0.1.0

Oracle Database 10g Release 2 Patch Set 3 10.2.0.4.0

Oracle Database 10g Release 2 Patch Set 4 10.2.0.5.0

There are 3 products installed in this Oracle Home.

Interim patches (2) :

Patch 8943287 : applied on Fri Oct 21 20:39:46 EEST 2011

Unique Patch ID: 12722995

Created on 23 Aug 2010, 11:45:16 hrs PST8PDT

Bugs fixed:

8943287
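When the inventory gets long, the applied interim patch IDs can be pulled out of the lsinventory text mechanically. A small sketch (the output format is assumed to match the sample above):

```shell
# Hypothetical helper: read "opatch lsinventory" output on stdin and
# print the IDs of applied interim patches, one per line.
list_patch_ids() {
  grep -E '^Patch [0-9]+ : applied' | awk '{print $2}'
}

# e.g.  $ORACLE_HOME/OPatch/opatch lsinventory | list_patch_ids
```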



Rollback applied patch:

Sometimes a patch applied to the system needs to be rolled back because of its side effects. In this case, the rollback is performed as follows.

$ORACLE_HOME/OPatch/opatch rollback -id 11114547

Thursday, September 5, 2019

Gather diagnostic Information using TFA in Oracle Database

Oracle Trace File Analyzer (TFA) provides a number of diagnostic tools in a single bundle, making it easy to gather diagnostic information about the Oracle database and clusterware, which in turn helps with problem resolution when dealing with Oracle Support.

If possible we should install Oracle Trace File Analyzer as root. This gives us the highest capabilities. If Oracle Trace File Analyzer is already installed, reinstalling will perform an upgrade to the existing location. If it is not already installed, the recommended location is /opt/oracle.tfa.

To install as root:

Download the appropriate Oracle Trace File Analyzer zip, copy to required machine and unzip.
Run the installTFA command: $ ./installTFA

To install as an ORACLE_HOME owner, use the -extractto option. This tells Oracle Trace File Analyzer where to install. The installer includes a JVM, but if you want to use one already installed, use the -javahome option to point to it.

./installTFA -extractto <install_dir> -javahome <jvm_dir>

If we do not want to use ssh, we can install on each host using a local install. Then we use tfactl syncnodes to generate and deploy the relevant SSL certificates.

Now it is time to collect diagnostics with TFA.

Please run TFA to collect all clusterware logs from all nodes (this needs to be done as root from node1's GI home):

TFA from each node: 
==> TFA Collector- The Preferred Tool for Automatic or ADHOC Diagnostic Gathering Across All Cluster Nodes ( Doc ID 1513912.1 ) 

Examples: 
/bin/tfactl diagcollect -all -from "" -to ""
/bin/tfactl diagcollect -all -since [2d|8h]
/bin/tfactl diagcollect -for ""    <-- will collect 12 hrs before and after the given time
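For the -from/-to form, the 12-hour window around an incident time can be computed with GNU date; a sketch (the timestamp format is an assumption, check tfactl diagcollect -h on your version):

```shell
# Sketch: print a tfactl diagcollect command covering +/- 12 hours
# around a given incident time (GNU date required for -d parsing).
diagcollect_window() {
  incident=$1                                   # e.g. "2019-09-05 14:00:00"
  from=$(date -u -d "$incident 12 hours ago" '+%Y-%m-%d %H:%M:%S')
  to=$(date -u -d "$incident 12 hours" '+%Y-%m-%d %H:%M:%S')
  echo "tfactl diagcollect -all -from \"$from\" -to \"$to\""
}
```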




Wednesday, September 4, 2019

Generate Incident Report in Oracle Database


The Automatic Diagnostics Repository (ADR) is a hierarchical file-based repository for handling diagnostic information. 

Directory structure is as:

$ADR_BASE/diag/rdbms/{DB-name}/{SID}/alert
$ADR_BASE/diag/rdbms/{DB-name}/{SID}/cdump
$ADR_BASE/diag/rdbms/{DB-name}/{SID}/hm
$ADR_BASE/diag/rdbms/{DB-name}/{SID}/incident
$ADR_BASE/diag/rdbms/{DB-name}/{SID}/trace
$ADR_BASE/diag/rdbms/{DB-name}/{SID}/{others}
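The layout above can be generated from the database name and SID; a throwaway sketch (ADR_BASE and the names are placeholders):

```shell
# Sketch: print the standard ADR subdirectories for one instance.
adr_dirs() {
  base=$1 db=$2 sid=$3
  for d in alert cdump hm incident trace; do
    echo "$base/diag/rdbms/$db/$sid/$d"
  done
}

# e.g.  adr_dirs /u01/app/oracle orcl orcl1
```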

To generate incident report quickly, we can follow the below steps:

adrci> show problem
adrci> show incident

adrci> show incident -mode detail -p "incident_id=incident_no" 
adrci> ips create package problem <problem_id> correlate all
adrci> ips generate package <package_number> in "/tmp"



Tuesday, September 3, 2019

Set up passwordless authentication in SSH

To set up SSH passwordless authentication (user equivalence) between two Linux servers, we can follow the steps below.

Let's assume we have only 2 servers, ServerA and ServerB, and we need to configure passwordless login between both servers.

We need to log in to ServerA and check whether the .ssh directory exists under the home directory.

In case it doesn't exist, we can create it:

cd /root

mkdir .ssh

chmod 700 .ssh

On Linux for root user it is normally /root/.ssh

$ cd /root/.ssh

$ ssh-keygen -t rsa

This will create 2 files, id_rsa and id_rsa.pub. One is the private key file and the other is the public key file.

$ cat id_rsa.pub >> auth_keys_a

Now copy this file to ServerB using the scp utility

$ scp auth_keys_a root@ServerB:/root/.ssh

On ServerB, append it to root's authorized_keys file so that sshd will actually consult the key:

$ cat /root/.ssh/auth_keys_a >> /root/.ssh/authorized_keys

Now on ServerB identify the .ssh directory, which should ideally be in the same location, /root/.ssh
$ cd /root/.ssh

$ ssh-keygen -t rsa

This will again create the public and private key files

$ cat id_rsa.pub >> auth_keys_b

Now scp this file back to ServerA

$ scp auth_keys_b root@ServerA:/root/.ssh

On ServerA, likewise append it to authorized_keys:

$ cat /root/.ssh/auth_keys_b >> /root/.ssh/authorized_keys

Now we can try to ssh between the nodes

From ServerA

ssh root@serverB


From ServerB

ssh root@serverA
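For the public key copied to the other server to be accepted, it must end up in that server's authorized_keys file with the permissions sshd insists on. A small idempotent helper sketch (file names and the key string are placeholders):

```shell
# Sketch: append a public key line to authorized_keys exactly once,
# creating the .ssh directory with the required permissions.
install_pubkey() {
  key=$1 auth=$2
  mkdir -p "$(dirname "$auth")"
  chmod 700 "$(dirname "$auth")"
  touch "$auth"
  chmod 600 "$auth"
  grep -qxF "$key" "$auth" || echo "$key" >> "$auth"   # skip if already present
}

# e.g. on ServerB:
# install_pubkey "$(cat /root/.ssh/auth_keys_a)" /root/.ssh/authorized_keys
```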