Wednesday, December 27, 2017

Problem mounting NFS: mount: wrong fs type, bad option, bad superblock

Sometimes we may get the error mount: wrong fs type, bad option, bad superblock while trying to mount an NFS share in Ubuntu. In my case, I faced this same error.

The reason is that for several filesystem types (e.g. nfs, cifs) the mount command needs a /sbin/mount.<type> helper program.
As we are trying to mount NFS, the /sbin/mount.nfs helper program is needed; it is provided by the nfs-common package. We can install it with:

sudo apt-get update
sudo apt install nfs-common

On the other hand, if we are trying to mount CIFS, the helper program is provided by cifs-utils. We can install it using the command below:

sudo apt install cifs-utils


Then we can mount our target. We will have to create a directory to use as the mount point, then mount the remote export:

sudo mount <server>:/<remote_path> <local_mount_dir>
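
For instance, a minimal sketch with a hypothetical server address, export path, and mount point (substitute your own):

sudo mkdir -p /mnt/nfs_share
sudo mount 192.168.1.10:/export/data /mnt/nfs_share
df -h /mnt/nfs_share   # confirm the share is mounted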

Thursday, December 21, 2017

Get elapsed time for an individual command

In sqlplus, to see the elapsed time for an individual query, we can use the "set timing on" command.

SQL> set timing on;
SQL> select surname from personal_details where firstname='udvas';

Elapsed: 00:00:02.52

Again, the "set timing on" command is a SQL*Plus command, but we can measure run time for Oracle SQL with a variety of Oracle tools.

Sometimes when working on SQL command optimizations, all that is desired is a rough timing estimate; namely, the SQL*Plus client elapsed execution time, or simple clock time. Often that simple metric is sufficient for basic tuning needs. SQL*Plus has a built-in capability to do exactly this: the SET TIMING command. It essentially records the clock time before and after the SQL command execution, then displays the difference.

This command reports timing per individual statement.
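
Beyond SQL*Plus, one alternative sketch for a rough elapsed-time measurement is DBMS_UTILITY.GET_TIME in PL/SQL, which returns a clock reading in hundredths of a second (the table name below reuses the example above):

SET SERVEROUTPUT ON
DECLARE
  t0      PLS_INTEGER;
  v_count PLS_INTEGER;
BEGIN
  t0 := DBMS_UTILITY.GET_TIME;                        -- clock before
  SELECT COUNT(*) INTO v_count FROM personal_details; -- the statement to time
  DBMS_OUTPUT.PUT_LINE('Elapsed: ' ||
    (DBMS_UTILITY.GET_TIME - t0) / 100 || ' sec');    -- difference, in seconds
END;
/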

Thursday, December 14, 2017

VisualVM for monitoring a remote JVM

Using VisualVM we can monitor all running Java processes and the status of the JVM behind each of them. VisualVM can be found inside the JDK bin directory.



Running VisualVM gives us access to locally running processes by default. To see the processes of a remote server, we have to connect to that server, and jstatd must be running on the target server.

jstatd: 

Monitors Java Virtual Machines (JVMs) and enables remote monitoring tools to attach to JVMs. This command is experimental and unsupported. 
The jstatd command is an RMI server application that monitors for the creation and termination of instrumented Java HotSpot VMs and provides an interface to enable remote monitoring tools to attach to JVMs that are running on the local host.


To run jstatd, a Java security permission is required. For this purpose, a policy file is passed as a command-line argument when starting jstatd:

 jstatd -J-Djava.security.policy=/home/paul/jstatd.all.policy



The content of jstatd.all.policy is:

bash-4.3$ cat /home/oracle/jstatd.all.policy

grant codebase "file:${java.home}/../lib/tools.jar" {
   permission java.security.AllPermission;
};


This grants permission only to the JDK's own tools.jar (which jstatd needs), rather than to arbitrary code, so it is close to the minimum required security permission.
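
A sketch of the remote setup end to end (the 1099 port is an assumption; it is jstatd's default RMI registry port):

# on the target server: start jstatd with the policy file on an explicit port
jstatd -J-Djava.security.policy=/home/paul/jstatd.all.policy -p 1099 &

Then in VisualVM on the workstation, use File > Add Remote Host and enter the server's address; the JVMs visible to jstatd should appear under that host.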




Monday, December 4, 2017

SVN Quick Note : Check all revision and download specific revision


We can use the command below to see the revision history with commit messages (-l5 limits it to the last five revisions, -v also lists the changed paths):

svn log -l5 -v



To download a specific revision from SVN:
svn checkout http://10.78.11.131/svn/mbsdb@7 testsvn/

* Here 7 is the revision number.
** testsvn is the target directory to download into.
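
As a related sketch: if you only want the files at that revision without the .svn working-copy metadata, svn export accepts the same @REV peg-revision syntax:

svn export http://10.78.11.131/svn/mbsdb@7 testsvn_export/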

Sunday, October 29, 2017

ORA-02297: cannot disable constraint -dependencies exist


When you try to disable a constraint on a parent table, it may fail with the error message ORA-02297: cannot disable constraint - dependencies exist, as below.

SQL> alter table transaction disable constraint TRANSACTION_PK;
alter table transaction disable constraint TRANSACTION_PK
*
ERROR at line 1:
ORA-02297: cannot disable constraint (OMS.TRANSACTION_PK) - dependencies exist

The reason behind this problem:
The disable constraint command fails because the table is a parent table and foreign keys in child tables depend on this constraint.

Two solutions exist for this problem.
1) Find the foreign key constraints that depend on this table's constraint, disable those foreign key constraints, and then disable this table's constraint.

The following query lists the dependent tables and the dependent constraint names. After that, disable the child constraints first and then the parent constraint.

SQL> SELECT p.table_name "Parent Table", c.table_name "Child Table",
     p.constraint_name "Parent Constraint", c.constraint_name "Child Constraint"
     FROM user_constraints p
     JOIN user_constraints c ON(p.constraint_name=c.r_constraint_name)
     WHERE (p.constraint_type = 'P' OR p.constraint_type = 'U')
     AND c.constraint_type = 'R'
     AND p.table_name = UPPER('&table_name');
Enter value for table_name: transaction
old   7:      AND p.table_name = UPPER('&table_name')
new   7:      AND p.table_name = UPPER('transaction')

Parent Table                   Child Table                    Parent Constraint              Child Constraint
------------------------------ ------------------------------ ------------------------------ ------------------------------
TRANSACTION                    USER_SALARY_RECORD             TRANSACTION_PK                 SYS_C005564
TRANSACTION                    TRANSACTION_DETAIL             TRANSACTION_PK                 TRNSCTN_DTL_TRNSCTN_FK

SQL> alter table USER_SALARY_RECORD disable constraint SYS_C005564;
Table altered.

SQL> alter table TRANSACTION_DETAIL  disable constraint TRNSCTN_DTL_TRNSCTN_FK;
Table altered.

SQL> alter table transaction disable constraint TRANSACTION_PK;
Table altered.

2) Disable the constraint with the CASCADE option.

SQL> alter table transaction disable constraint TRANSACTION_PK cascade;
Table altered.
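
Keep in mind that CASCADE also disables the dependent child constraints, and there is no ENABLE ... CASCADE; a sketch of re-enabling afterwards, parent first, using the constraint names from the example above:

SQL> alter table transaction enable constraint TRANSACTION_PK;
SQL> alter table USER_SALARY_RECORD enable constraint SYS_C005564;
SQL> alter table TRANSACTION_DETAIL enable constraint TRNSCTN_DTL_TRNSCTN_FK;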

Thursday, October 12, 2017

Searching text in the file contents inside a directory in linux

Sometimes we need to find a specific string of text within the contents of the files in a directory on a Linux system; that is, to look for text within the files, not in the file names.


In such a case we can use the command below:


grep -r -l "specific text" /directory/path/full/*criteria_if_required_in_file_name*log
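
Here -r recurses into subdirectories and -l prints only the names of matching files. A couple of variants (the directory is a placeholder):

grep -rn "specific text" /var/log/myapp/    # -n shows the matching lines with line numbers
grep -rli "specific text" /var/log/myapp/   # -i makes the match case-insensitive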

Tuesday, October 10, 2017

weblogic.net.http.SOAPHttpsURLConnection cannot be cast to javax.net.ssl.HttpsURLConnection

In a WebLogic environment, the error "weblogic.net.http.SOAPHttpsURLConnection cannot be cast to javax.net.ssl.HttpsURLConnection" may appear while trying to connect to a URL using javax.net.ssl.HttpsURLConnection.

In my case, modifying the config.xml file inside the config directory of the server domain directory solved the problem, by adding the following under the relevant <server> element:


<server-start>
<arguments>-DUseSunHttpHandler=true</arguments>
</server-start>

This tells the WebLogic server to use the Sun HTTP handlers instead of installing its own when the server starts.
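
Alternatively (a sketch; exact variable handling varies by WebLogic version and start script), the same flag can be passed through JAVA_OPTIONS in the domain's setDomainEnv.sh so it also applies to servers started from the command line:

# in $DOMAIN_HOME/bin/setDomainEnv.sh
JAVA_OPTIONS="${JAVA_OPTIONS} -DUseSunHttpHandler=true"
export JAVA_OPTIONS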

Thursday, July 6, 2017

NGINX: upstream timed out (110: Connection timed out) while reading response header from upstream

In my case, I got around this issue by clearing the Connection keep-alive header and specifying the HTTP version, as per the answer here:

https://stackoverflow.com/a/36589120/479632

server {
    location / {
        proxy_set_header   X-Real-IP $remote_addr;
        proxy_set_header   Host      $http_host;

        # these two lines are the fix: speak HTTP/1.1 to the upstream and clear the Connection header
        proxy_http_version 1.1;
        proxy_set_header Connection "";

        proxy_pass http://localhost:5000;
    }
}
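
If the upstream genuinely needs more than nginx's default 60 seconds to produce a response, raising the read timeout is another option (a sketch; the 300s value is an assumption to tune for your backend):

server {
    location / {
        # allow slow upstream responses (proxy_read_timeout defaults to 60s)
        proxy_read_timeout 300s;

        proxy_pass http://localhost:5000;
    }
}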

Monday, June 12, 2017

Oracle Bug: kewastUnPackStats(): bad magic 1 (0x2b2b8f4652ad, 0)

Sometimes we may get this error in the RDBMS alert log.

It generates thousands of lines each day, which inflates the alert log size.

We can suppress this error with:

ALTER SYSTEM SET control_management_pack_access='NONE' SCOPE=MEMORY;

Note that SCOPE=MEMORY lasts only until the next instance restart (and setting this parameter to NONE disables the Diagnostics and Tuning pack features). For a permanent solution, we need to apply the relevant Oracle patch.

Thursday, June 1, 2017

Starting a Rac Instance from the spfile of another instance for Oracle Database

At a Glance:
  1. Create a pfile from the spfile.
  2. Modify the pfile for the target instance.
  3. Start the instance with the new pfile.


Step 1: Create pfile from spfile:

create pfile='/tmp/pfile_n1_01062017' from spfile;

Step 2: Transfer the file to target node:

scp /tmp/pfile_n1_01062017 oracle@sdbabut2:/tmp/pfile_n1_01062017.ora

Step 3: Modify the file according to the target node:
Fields to change: thread, instance_number, dispatchers.
Replace the source node service name with the target host service name, e.g. in vi:

:%s/source_host_service_name/target_host_service_name/g
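
For example (a sketch; the instance names orcl1/orcl2 are assumptions), instance-prefixed parameters such as these must be renamed and renumbered for the target instance:

orcl1.thread=1                    ->  orcl2.thread=2
orcl1.instance_number=1           ->  orcl2.instance_number=2
orcl1.undo_tablespace='UNDOTBS1'  ->  orcl2.undo_tablespace='UNDOTBS2'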




Step 4: Start instance from modified pfile:

sqlplus / as sysdba
> startup mount pfile='/tmp/modifiedpfile.ora'

If it shows an error, modify the pfile accordingly and retry.

In case of successful startup, we need to alter the mode of the database.

sqlplus> alter database open;

If everything seems fine, create the spfile from the memory.
sqlplus> create spfile from memory;


Sunday, May 28, 2017

To remove a single line from history

If we have already run a command and want to remove it from history, first we will have to use
history
to display the list of commands in the history. Then find the number next to the one we want to delete (e.g. 4321) and run
history -d 4321
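
Note that history -d only edits the current shell's in-memory history list; to flush the change to ~/.bash_history right away (a bash-specific detail), follow with:

history -w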

Tuesday, May 23, 2017

Installing Python cx_Oracle on Ubuntu


cx_Oracle is the Python interface to Oracle. To install it in an Ubuntu environment, we can follow the steps below:


Step 1:
       sudo apt-get install build-essential unzip python-dev libaio-dev

Step 2:
Download the appropriate zip files from the Oracle Instant Client download page. You'll need:

instantclient-basic-linux
instantclient-sdk-linux

Get the appropriate version for your system (x86 vs. x64, etc.). Make sure you don't get version 12, since it's not supported by the cx_Oracle module yet.

Unzip the contents into the same location, so you'll end up with a folder named instantclient_11_2 containing a bunch of .so and .jar files.

In my case I used $ORACLE_INST_CLIENT, which points to the location where you unzipped your instantclient folders:

export ORACLE_INST_CLIENT=$(pwd)/instantclient_11_2

Step 3:
Create a symlink to your .so file:

cd $ORACLE_INST_CLIENT
ln -s libclntsh.so.11.1   libclntsh.so  # the version number on your .so file might be different

Step 4: 
Update your /etc/profile or your ~/.bashrc

export ORACLE_INST_CLIENT=/location/of/your/files/instantclient_11_2
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_INST_CLIENT

Step 5: 
Edit /etc/ld.so.conf.d/oracle.conf

This is a new file; simply add the location of your .so files here, then update the linker cache using:

sudo ldconfig
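
A sketch of what /etc/ld.so.conf.d/oracle.conf would contain (the path is whatever you used in Step 2):

/location/of/your/files/instantclient_11_2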

Step 6:
Finally, just install the cx_Oracle module:

pip install cx_oracle
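
To verify the install end to end, a minimal sketch (the host, port, service name, user, and password are placeholders):

import cx_Oracle

# user, password, and an EZConnect-style DSN: host:port/service_name
conn = cx_Oracle.connect("scott", "tiger", "172.20.1.1:1521/orcl")
print(conn.version)  # prints the database version if everything is wired up
conn.close()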








Monday, May 22, 2017

Session Trace to find the reason in Oracle DB

Sometimes a query that previously ran quickly may take much longer now. To find out the reason, we can follow the steps below.

To find out what is happening you need to do a session trace of a session running this SQL.

  • Open up SQL*Plus and connect to your database.
  • Enable timed statistics if it is not already enabled.


alter session set timed_statistics=true;


  • Turn on a level 8 SQL Trace of your session. A level 8 trace will capture your wait events. The alter session set events statement below is what enables the SQL Trace of your session.


alter session set tracefile_identifier='Target_Trace';
alter session set events '10046 trace name context forever, level 8';


  • Run your Select and then exit SQL*Plus.



  • On your database server, go to the user dump destination ($UDUMP) directory and find your trace file. It will have 'Target_Trace' (the tracefile_identifier set above) as part of the name.

  • Using your trace file as input, run the tkprof utility to format it into a more readable form. If you invoke tkprof with no arguments, it prints a help screen with the valid options. Make sure you specify the EXPLAIN and WAITS=YES options.
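
A sketch of a typical invocation (the trace file name, output file, and credentials are placeholders):

tkprof orcl_ora_12345_Target_Trace.trc trace_report.txt explain=scott/tiger waits=yes sys=no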


Using the output of the TKProf utility you should be able to identify the problem. You would need to especially review the Wait events and Explain plan output.

Monday, May 1, 2017

Tips for Writing Your Research Proposal


1. Know yourself: Know your area of expertise, what are your strengths and what are your weaknesses. Play to your strengths, not to your weaknesses. If you want to get into a new area of research, learn something about the area before you write a proposal. Research previous work. Be a scholar.
.
2. Know the program from which you seek support: You are responsible for finding the appropriate program for support of your research. 
.
3. Read the program announcement: Programs and special activities have specific goals and specific requirements. If you don’t meet those goals and requirements, you have thrown out your chance of success. Read the announcement for what it says, not for what you want it to say. If your research does not fit easily within the scope of the topic areas outlined, your chance of success is nil.
.
4. Formulate an appropriate research objective: A research proposal is a proposal to conduct research, not to conduct development or design or some other activity. Research is a methodical process of building upon previous knowledge to derive or discover new knowledge, that is, something that isn’t known before the research is conducted. 
.
5. Develop a viable research plan: A viable research plan is a plan to accomplish your research objective that has a non-zero probability of success. The focus of the plan must be to accomplish the research objective.
.
6. State your research objective clearly in your proposal: A good research proposal includes a clear statement of the research objective. Early in the proposal is better than later in the proposal. The first sentence of the proposal is a good place. A good first sentence might be, “The research objective of this proposal is...” Do not use the word “develop” in the statement of your research objective. 
.
7. Frame your project around the work of others: Remember that research builds on the extant knowledge base, that is, upon the work of others. Be sure to frame your project appropriately, acknowledging the current limits of knowledge and making clear your contribution to the extension of these limits. Be sure that you include references to the extant work of others. 
.
8. Grammar and spelling count: Proposals are not graded on grammar. But if the grammar is not perfect, the result is ambiguities left to the reviewer to resolve. Ambiguities make the proposal difficult to read and often impossible to understand, and often result in low ratings. Be sure your grammar is perfect. 
.
9. Format and brevity are important: Do not feel that your proposal is rated based on its weight. Use 12-point fonts, use easily legible fonts, and use generous margins. Take pity on the reviewers. Make your proposal a pleasant reading experience that puts important concepts up front and makes them clear. Use figures appropriately to make and clarify points, but not as filler. 
.
10. Know the review process: Know how your proposal will be reviewed before you write it. Proposals that are reviewed by panels must be written to a broader audience than proposals that will be reviewed by mail. Mail review can seek out reviewers with very specific expertise in very narrow disciplines. 
.
11. Proofread your proposal before it is sent: Many proposals are sent out with idiotic mistakes, omissions, and errors of all sorts. Proposals have been submitted with the list of references omitted and with the references not referred to. Proposals have been submitted to the wrong program. Proposals have been submitted with misspellings in the title. These proposals were not successful. Stupid things like this kill a proposal. It is easy to catch them with a simple, but careful, proofreading. Don't spend six or eight weeks writing a proposal just to kill it with stupid mistakes that are easily prevented.
.
12. Submit your proposal on time: Duh? Why work for two months on a proposal just to have it disqualified for being late? Remember, fairness dictates that proposal submission rules must apply to everyone. It is not up to the discretion of the program officer to grant you dispensation on deadlines. Get your proposal in two or three days before the deadline.



Thursday, April 20, 2017

DML Transaction Info in Oracle

For each DML operation, a transaction ID is created. We can get the transaction ID with the query below:


select dbms_transaction.local_transaction_id from dual;




PAUL @ pauldb-uat > select dbms_transaction.local_transaction_id from dual;

LOCAL_TRANSACTION_ID
--------------------------------------------------------------------------------
8.10.4224


The transaction ID is a series of numbers denoting the undo segment number, slot#, and record# (also known as sequence#) respectively, separated by periods.


To get the details about a transaction we can use the below query:

select
    owner               object_owner,
    object_name         object_name,
    session_id          oracle_sid,
    oracle_username     db_user,
    decode(LOCKED_MODE,
        0, 'None',
        1, 'Null',
        2, 'Row Share',
        3, 'Row Exclusive',
        4, 'Share',
        5, 'Sub Share Exclusive',
        6, 'Exclusive',
        locked_mode
    )                   locked_mode
    from v$locked_object lo,
        dba_objects do
    where
        (xidusn||'.'||xidslot||'.'||xidsqn)
            = ('&transid')
    and
        do.object_id = lo.object_id;

Wednesday, April 19, 2017

Clearing ITL Slots in Oracle

For a DML operation, the transaction makes an entry in each block containing target rows. These entries are maintained in the ITL (Interested Transaction List). Now our interest is in the question of when these ITL entries are cleared.

To answer that question, we can consider this scenario: a transaction updates 10000 records, on 10000 different blocks. Naturally there will be 10000 ITL slots, one on each block, all pointing to the same transaction ID. The transaction commits; and the locks are released. Should Oracle revisit each block and remove the ITL entry corresponding to the transaction as a part of the commit operation?

If that were the processing logic, the commit would take a very long time. Acquiring the buffers of the 10000 blocks and updating the ITL entries would not be quick; it would prolong commit processing. Oracle's design goal is that commit processing is very quick: a flush of the log buffer to the redo logs and the writing of the commit marker in the redo stream. Even a checkpoint to the datafiles is not done as part of commit processing; all the effort goes toward making the process fast, very fast. Had Oracle added the logic of altering ITL slots, commit processing would have been potentially very long. Therefore Oracle does not remove the ITL entries after the transaction ends (by committing or rolling back); the slots are just left behind as artifacts.

So, when does the ITL entry get cleared? When the block's buffer is written to disk, the unneeded ITL entries are checked and cleared out.

How ITL Slots are maintained in Block

When a transaction modifies rows, it locks them (until it commits) by placing a special type of entry in the block header known as an Interested Transaction List (ITL) entry. The ITL entry records the transaction ID and other information.

Now assume there are 5 records in the block and a transaction updated (and therefore locked) all five of them. How many ITL entries will be used: one or five?

Five ITL slots may seem feasible; but what if the block has 10,000 records? Is it possible to have that many ITL slots in the block header? Let's ponder that for a second. There would be two big issues with that many ITL slots.

First, each ITL slot, by the way, is 24 bytes long. So 10,000 slots would take up 240,000 bytes, or about 234 KB. A typical Oracle block is 8KB (it could be 2K, 4K or 16K; but suppose it is the default 8K). Of course it can't accommodate 234 KB.

Second, even if the total size of the ITL slots were less than the size of the block, where would the room be to hold data? In addition, there should be some space for the data block overhead; where would that space come from?

Obviously, these are genuine problems that make one ITL slot per row impractical. Therefore Oracle does not create an ITL entry for each locked row. Instead, it creates the ITL entry for each transaction, which may have updated a number of rows. Let me repeat that – each ITL slot in the block header actually refers to a transaction; not the individual rows. That is the reason why you will not find the rowid of the rows locked in the ITL slot. 

There is a reference to a transaction ID, but not a rowid. When a transaction wants to update a row in the block, it checks the ITL entries. If there are none, it means rows in that block are unlocked. However, if there are some ITL entries, does it mean that some rows in the block are locked? Not necessarily. It simply means that rows in that block were locked earlier; that lock may or may not be active now. To check whether a row is locked, the transaction checks the lock byte stored along with the row.

So if the presence of an ITL slot does not mean a record in the block is locked, when does the ITL slot get cleared so that it can be reused, or when does it disappear? As described above, commit and rollback do not clear the ITL slot; it is cleaned out later, when the block's buffer is written to disk.
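
To look at the ITL entries of a block yourself, a sketch (the file# and block# below are placeholders you would first obtain via DBMS_ROWID; the dump is written to a trace file in the diagnostic trace directory):

-- find the file# and block# that hold a row of interest
SELECT DBMS_ROWID.ROWID_RELATIVE_FNO(rowid) AS file#,
       DBMS_ROWID.ROWID_BLOCK_NUMBER(rowid) AS block#
FROM   transaction WHERE ROWNUM = 1;

-- dump that block; look for the 'Itl' list near the top of the dump
ALTER SYSTEM DUMP DATAFILE 4 BLOCK 171;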

Tuesday, April 18, 2017

ITL- Interested Transaction List

Oracle keeps note of which rows are locked by which transaction in an area at the top of each data block known as the 'interested transaction list'. The number of ITL slots in any block in an object is controlled by the INITRANS and MAXTRANS attributes. INITRANS is the number of slots initially created in a block when it is first used, while MAXTRANS places an upper bound on the number of entries allowed. Each transaction which wants to modify a block requires a slot in this 'ITL' list in the block.

If multiple transactions attempt to modify the same block, they can block each other if the following conditions are fulfilled:

- There is no free ITL ("Interested Transaction List") slot available. Oracle records the lock information right in the block, and each transaction allocates an ITL entry.

- There is insufficient space left in the block to add a new ITL slot. Since each ITL entry requires some space (24 bytes, as noted above), a new one cannot be created if the block doesn't have sufficient free space.

The INITRANS and MAXTRANS settings of a segment control the initial and maximum number of ITL slots per block. In recent Oracle releases the default INITRANS is 1 for tables and 2 for indexes, and the default MAXTRANS is 255 since the 10g release.

The issue typically shows up when a block is almost full and several transactions attempt to manipulate different rows that all reside in this block.


The ITL in the wait event "enq: TX - allocate ITL entry" stands for "Interested Transaction List", and there are several approaches to fixing this error:
1 - Increasing the value of INITRANS and/or MAXTRANS for the table and indexes (see the sketch after this list).
2 - Move the table to a smaller blocksize.
3 - In some cases, you can remove the enq: TX - allocate ITL entry error for UPDATE/DELETE DML issues by reorganizing the table to increase PCTFREE for the table, thereby leaving fewer rows per data block.
4 - Reduce the degree of parallel DML on this table.
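
A sketch of the first fix (the table and index names reuse earlier examples; the value 10 is an assumption to size for your concurrency). Raising INITRANS on an existing table only affects newly formatted blocks, so existing blocks need a rebuild:

ALTER TABLE transaction INITRANS 10;
ALTER TABLE transaction MOVE;                    -- rewrites existing blocks with the new setting
ALTER INDEX transaction_pk REBUILD INITRANS 10;  -- also required because MOVE invalidates indexes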

Monday, April 17, 2017

Parallel Execution Wait Events in Oracle

Oracle Parallel Execution can help utilize the power of your hardware and yet remains under-used. 


It is an interesting technology that is particularly suited to data warehousing, in that it allows a single user (or small set of users) to effectively soak up all of the server resources to satisfy a particular query. An AskTom article by Tom Kyte on parallel query notes that this isn't always what you want (you wouldn't, for example, want individual OLTP users soaking up all resources for regular transactions or queries), but parallel query is an excellent way to use all the available CPUs and disk units for a particularly big warehouse query.


Oracle documentation states that these are mainly idle wait events, because they indicate the normal behavior of a process waiting for another process to do its work:


  • PX Deq: Table Q Normal 
  • PX Deq: Execute Reply 
  • PX Deq Credit: send blkd 



PX Deq: Table Q Normal

Indicates that the slave is waiting for data to arrive on its input table queue. In a parallel execution environment we have a producer-consumer model. One slave set works on the data (e.g. reads data from disk, does a join); this is called the producer slave set. The other slave set waits to get the data so it can start its work; the slaves in this set are called consumers. The wait event "PX Deq: Table Q Normal" means that the slaves in the consumer set have to wait for rows (data) from the other slave set before they can start their work.


PX Deq: Execute Reply 

The query coordinator (QC) is expecting a response (acknowledgment) to a control message from the slaves, or is expecting to dequeue data from the producer slave set. This means the QC is waiting for the slaves to finish executing the SQL statement and to send the result of the query back.


PX Deq Credit: send blkd 

The wait events "PX Deq Credit: need buffer" and "PX Deq Credit: send blkd" occur when data or messages are exchanged between processes that are part of a parallel query.

There is another event: the PX qref latch event can often mean that the producers are producing data more quickly than the consumers can consume it. On this particular system, very high degrees of parallelism were being used during an overnight batch run, so a great deal of messaging was going on. We could increase parallel_execution_message_size to try to eliminate some of these waits, or we might decrease the DOP (Degree of Parallelism).
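
A sketch of how to watch these waits on a live system, using the standard dynamic performance views (it only returns rows while parallel sessions are active):

SELECT sw.sid, sw.event, sw.seconds_in_wait
FROM   v$session_wait sw
JOIN   v$px_session  px ON sw.sid = px.sid
WHERE  sw.event LIKE 'PX Deq%'
ORDER  BY sw.seconds_in_wait DESC;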

Friday, April 14, 2017

Sending mail by PL/SQL in Oracle

A stored procedure to send mail can be written as below.

We use a collection type in the SP to hold the CC mail list. For this purpose, we can create:


CREATE OR REPLACE TYPE mail_ccs AS TABLE OF VARCHAR2(200);

SP Definition:

CREATE OR REPLACE PROCEDURE sendMail(
    username       IN VARCHAR2,
    password       IN VARCHAR2,
    smtpHost       IN VARCHAR2,
    smtpPort       IN PLS_INTEGER DEFAULT 25,
    mailFrom       IN VARCHAR2,
    rcptTo         IN VARCHAR2,
    ccs            IN mail_ccs,
    messageSubject IN VARCHAR2,
    messageBody    IN VARCHAR2)
IS
  l_conn UTL_SMTP.connection;
  l_ccs              VARCHAR2(2000);
  l_encoded_username VARCHAR2(200);
  l_encoded_password VARCHAR2(200);
BEGIN
  --open connection
  /*
  l_conn := UTL_SMTP.open_connection(smtpHost, smtpPort);
  UTL_SMTP.helo(l_conn, smtpHost);
  */
  --In case of authentication needed
  l_encoded_username := UTL_RAW.cast_to_varchar2(UTL_ENCODE.base64_encode(UTL_RAW.cast_to_raw(username)));
  l_encoded_password := UTL_RAW.cast_to_varchar2(UTL_ENCODE.base64_encode(UTL_RAW.cast_to_raw(password)));
  l_conn             := UTL_SMTP.open_connection(smtpHost, smtpPort);
  UTL_SMTP.ehlo(l_conn, smtpHost);--DO NOT USE HELO
  UTL_SMTP.command(l_conn, 'AUTH', 'LOGIN');
  UTL_SMTP.command(l_conn, l_encoded_username);
  UTL_SMTP.command(l_conn, l_encoded_password);
  --prepare headers
  UTL_SMTP.mail(l_conn, mailFrom);
  UTL_SMTP.rcpt(l_conn, rcptTo);
  /*if we have multiple recipients or CCs, we must call UTL_SMTP.rcpt once for each one
  however, we shall specify that there are CCs in the mail header in order for them to appear as such*/
  IF ccs IS NOT NULL THEN
    FOR i IN ccs.FIRST..ccs.LAST
    LOOP
      UTL_SMTP.rcpt(l_conn, ccs(i));--add recipient
      l_ccs:=l_ccs||ccs(i)||',';    --mark as CC
    END LOOP;
    --now remove the trailing comma at the end of l_ccs
    l_ccs:=SUBSTR(l_ccs,0,LENGTH(l_ccs)-1 );
  END IF;
  --start multi line message
  UTL_SMTP.open_data(l_conn);
  --prepare mail header
  /*DO NOT USE MON instead of MM in the date pattern if you run the script on machines with different locales as it will be misunderstood
  and the mail date will appear as 01/01/1970*/
  UTL_SMTP.write_data(l_conn, 'Date: ' || TO_CHAR(SYSDATE, 'DD-MM-YYYY HH24:MI:SS') || UTL_TCP.crlf);
  UTL_SMTP.write_data(l_conn, 'To: ' || rcptTo || UTL_TCP.crlf);
  UTL_SMTP.write_data(l_conn, 'Cc: ' || l_ccs || UTL_TCP.crlf);
  UTL_SMTP.write_data(l_conn, 'From: ' || mailFrom || UTL_TCP.crlf);
  UTL_SMTP.write_data(l_conn, 'Subject: ' || messageSubject || UTL_TCP.crlf || UTL_TCP.crlf);
  --include the message body
  UTL_SMTP.write_data(l_conn, messageBody || UTL_TCP.crlf || UTL_TCP.crlf);
  --send the email
  UTL_SMTP.close_data(l_conn);
  UTL_SMTP.quit(l_conn);
END; 



To use the above SP, we can call it with the block below:

DECLARE
  USERNAME VARCHAR2(200);
  PASSWORD VARCHAR2(200);
  SMTPHOST VARCHAR2(200);
  SMTPPORT BINARY_INTEGER;
  MAILFROM VARCHAR2(200);
  RCPTTO VARCHAR2(200);
  CCS PAUL.MAIL_CCS;
  MESSAGESUBJECT VARCHAR2(200);
  MESSAGEBODY VARCHAR2(200);
BEGIN
  USERNAME := 'paul.pronabananda';
  PASSWORD := 'paulpassword';
  SMTPHOST := '172.20.1.1';
  SMTPPORT := 50;
  MAILFROM := 'paul.pronabananda';
  RCPTTO := 'paul.pronabananda@gmail.com';
  -- CCS left uninitialized (NULL) means no CC recipients; to add some:
  -- CCS := PAUL.MAIL_CCS('cc1@example.com', 'cc2@example.com');
  MESSAGESUBJECT := 'Test';
  MESSAGEBODY := 'Test Body';

  SENDMAIL(
    USERNAME => USERNAME,
    PASSWORD => PASSWORD,
    SMTPHOST => SMTPHOST,
    SMTPPORT => SMTPPORT,
    MAILFROM => MAILFROM,
    RCPTTO => RCPTTO,
    CCS => CCS,
    MESSAGESUBJECT => MESSAGESUBJECT,
    MESSAGEBODY => MESSAGEBODY
  );
--rollback; 

END;



If the mail server address is not already added to the ACL list, we will have to add that info to the ACL.

If the ACL does not exist, we first need to create it with the code below:

BEGIN
  DBMS_NETWORK_ACL_ADMIN.CREATE_ACL (
    acl         => 'utlpkg.xml',
    description => 'created by paul',
    principal   => 'PAUL',
    is_grant    => TRUE,
    privilege   => 'connect',
    start_date  => null,
    end_date    => null);
  COMMIT;  -- like the assign_acl block below, the change must be committed
END;

After creation, we need to assign the host IP and port range to that ACL using the code below:

BEGIN
  DBMS_NETWORK_ACL_ADMIN.assign_acl (
    acl         => 'utlpkg.xml',
    host        => '172.20.1.1', 
    lower_port  => 1,
    upper_port  => 100);   

  COMMIT;
END;
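
To verify the setup, a sketch using the standard dictionary views:

SELECT host, lower_port, upper_port, acl FROM dba_network_acls;
SELECT acl, principal, privilege FROM dba_network_acl_privileges;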



Wednesday, April 12, 2017

Retrieving Query with its Bind Variable Value

To increase query performance, using bind variables is an important factor. When we need to execute a SQL query multiple times where only the literal values change, experts suggest using bind variables; the effect on query performance can be significant.

When retrieving a query from the database, the values of its bind variables are not present in the query text. But we may need those values when working on that SQL query, especially when trying to tune it.

Here are two queries that help you find the values of bind variables. The only thing you need is the SQL ID:

1. If your query was run within 30 minutes:

SELECT NAME, POSITION, DATATYPE_STRING, VALUE_STRING
FROM v$sql_bind_capture WHERE sql_id = '<sql_id>';


2. If your query ran more than 30 minutes ago. For this you also need a snapshot ID; by default Oracle stores snapshots for the last 7 days:

SELECT NAME, POSITION, DATATYPE_STRING, VALUE_STRING
FROM DBA_HIST_SQLBIND WHERE SQL_ID = '<sql_id>' AND SNAP_ID = '<snap_id>';


Note: v$sql_bind_capture and DBA_HIST_SQLBIND store only the bind values that appear in the WHERE clause, and they do not store timestamp, LOB, or CLOB type values.

Disk Consumption with Average Incremental Value

SELECT b.tsname tablespace_name ,
  MAX(b.used_size_mb) cur_used_size_mb ,
  ROUND(AVG(inc_used_size_mb),2)avg_increas_mb
FROM
  (SELECT a.days,
    a.tsname ,
    used_size_mb ,
    used_size_mb - LAG (used_size_mb,1) OVER ( PARTITION BY a.tsname ORDER BY a.tsname,a.days) inc_used_size_mb
  FROM
    (SELECT TO_CHAR(sp.begin_interval_time,'MM-DD-YYYY') days ,
      ts.tsname ,
      MAX(ROUND((tsu.tablespace_usedsize* dt.block_size )/(1024*1024),2)) used_size_mb
    FROM DBA_HIST_TBSPC_SPACE_USAGE tsu ,
      DBA_HIST_TABLESPACE_STAT ts ,
      DBA_HIST_SNAPSHOT sp,
      DBA_TABLESPACES dt
    WHERE tsu.tablespace_id    = ts.ts#
    AND tsu.snap_id            = sp.snap_id
    AND ts.tsname              = dt.tablespace_name
    AND sp.begin_interval_time > sysdate-7
    GROUP BY TO_CHAR(sp.begin_interval_time,'MM-DD-YYYY'),
      ts.tsname
    ORDER BY ts.tsname,
      days
    ) a
  ) b
GROUP BY b.tsname
ORDER BY b.tsname;

Tuesday, April 11, 2017

SQL Plan Management (SPM) in Oracle

Many times we experience performance regressions because of many things, e.g. an execution plan change. For this type of problem there is an elegant solution called SQL Plan Management (SPM).

Execution plan changes occur due to various system changes. For example, you might have (manually or automatically) updated statistics for some objects, or changed a few optimizer-related parameters. A more dramatic change is a database upgrade (say from 10gR2 to 11g). All of these changes have the potential to cause new execution plans to be generated for many of your SQL statements. Most new plans are obviously improvements because they are tailored to the new system environment, but some might be worse leading to performance regressions. 

DBAs have several options for addressing these regressions. However, what most DBAs want is simple: plans should only change when they will result in performance gains. In other words, the optimizer should not pick bad plans, period. 


SQL plan management provides a mechanism for maintaining consistent SQL performance regardless of changes in optimizer version, optimizer statistics, schema changes, system settings and SQL profile creation.


How SPM Works:
When a SQL statement is hard parsed, the cost based optimizer produces several execution plans and selects the one with the lowest cost. If a SQL plan baseline is present, the optimizer compares the plan it just produced with the plans in the SQL plan baseline. If a matching plan is found that is flagged as accepted the plan is used. If the SQL plan baseline doesn't contain an accepted plan matching the one it just created, the optimizer evaluates the accepted plans in the SQL plan baseline and uses the one with the lowest cost. If the execution plan originally produced by the optimizer has a lower cost than those present in the SQL plan baseline, it is added to the baseline as a not-accepted plan, so it is not used until it is verified not to cause a reduction in performance. If a system change affects all existing accepted plans, so they are considered non-reproducible, the optimizer will use the plan with the lowest cost.

Oracle calls this a "conservative plan selection strategy", as the optimizer preferentially uses a tried and tested execution plan, even if a new plan looks like it might perform better. Only when the newer plan is proven to perform well will it be accepted for use.


SQL Plan Management (SPM) has three major components:

SQL Plan Baseline Capture - Create SQL plan baselines for trusted (accepted) SQL plans.
SQL Plan Baseline Selection - Ensure that only accepted SQL plans are used when a SQL statement is executed.
SQL Plan Baseline Evolution - Evaluate all SQL plans (old and new) for each SQL statement. A new plan will only be accepted if it gives better or equal performance compared with an existing trusted plan; otherwise it is rejected. (If no trusted plan exists, the current plan is accepted.)


To enable this cool feature we just do the following:

ALTER SYSTEM SET optimizer_capture_sql_plan_baselines=TRUE SCOPE=BOTH; [Default value is FALSE]
ALTER SYSTEM SET optimizer_use_sql_plan_baselines=TRUE SCOPE=BOTH; [Default value is TRUE, so just check the parameter value]
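
To see what has been captured, a sketch using the standard view:

SELECT sql_handle, plan_name, enabled, accepted
FROM   dba_sql_plan_baselines;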