Tuesday, November 17, 2015

To run Eclipse with a specific Java version

In some cases, we need to run an Eclipse instance with a Java version other than the default one. For example, the default Java may be JRE 7, but we need to run Eclipse with JRE 8 for JavaFX application development. We can handle this situation in three different ways.

1. Change the default Java version to the version we need.
2. Launch Eclipse with the specific Java version from the command line.
3. Set a specific Java for a specific Eclipse installation by modifying the eclipse.ini file in the Eclipse folder.


In my opinion, the last one is the best policy: the first may affect other programs, and the second has to be repeated on every launch, which is inconvenient.

To run from the command line:
go to the folder containing the eclipse executable and run the command below:
./eclipse -vm /usr/lib/jvm/jdk1.8.0_65/bin/



To change the eclipse.ini file, we need to locate it first; it sits in the same folder as the eclipse executable.
We need to add a -vm entry above the

-vmargs

line. At first, I added it directly on a single line:

-vm  /usr/lib/jvm/jdk1.8.0_65/bin/

above the -vmargs line, but this did not work for me. The path has to go on its own line:
-vm  
/usr/lib/jvm/jdk1.8.0_65/bin/

and it works fine.
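For reference, here is a minimal sketch of what the relevant part of eclipse.ini might look like after the change (the JDK path and the -vmargs options are just examples from a typical installation; adjust them to yours):

-vm
/usr/lib/jvm/jdk1.8.0_65/bin/
-vmargs
-Dosgi.requiredJavaVersion=1.6
-Xms40m
-Xmx512m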

Monday, November 9, 2015

Active Session History || ASH in Oracle

The v$active_session_history view was introduced in Oracle 10g; it keeps a history of recent active sessions' activity. Oracle takes snapshots of active database sessions every second without placing serious overhead on the system. A database session is considered active by Oracle when it is consuming CPU time or waiting for an event that does not belong to the idle wait class.

Every second, Active Session History polls the database to identify the active sessions and dumps relevant information about each of them—such as the user ID, state, the machine it is connected from, and the SQL it is executing—into a special area in the system global area (SGA) of the database instance called the ASH buffer. So even though a session is no longer present in the database instance, the ASH buffer has captured its information. In addition, because Active Session History records activities every second, it can show a second-by-second snapshot of the activities in a session. In other words, Active Session History can show a movie of the activities of the session instead of a single picture. (Note that when the ASH buffer is filled, the data is written to disk and the snapshots are taken every 10 seconds rather than every second.)
You can examine the contents of the ASH buffer in a view named V$ACTIVE_SESSION_HISTORY.
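For a quick look (a minimal illustrative query, not from the original article), you can list the most recent samples:

select * from (
select sample_time, session_id, sql_id, event
from v$active_session_history
order by sample_time desc)
where rownum <= 10;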

This view contains a considerable amount of the information that is available in the v$session view, but it also has the sample_time column, which points to a time in the past when a session was doing some work or waiting for a resource. The v$active_session_history view contains a single row for each session per sampling.

An interesting possibility becomes available with the introduction of the v$active_session_history view in Oracle 10g. With this tool, Oracle DBAs are now able to trace sessions without the need to use the well-known 10046 event to perform extended tracing. All tracing can now be performed using only SQL queries, without the need to review raw trace files and format them using the TKPROF utility.

Oracle keeps session history in a circular memory buffer in the SGA. This means that the greater the database activity, the shorter the period of session history available in the ASH view. In this case it helps that the AWR dba_hist_active_sess_history view stores the ASH history for a longer time; however, the dba_hist_active_sess_history view stores ASH data only for the times the AWR snapshots were taken.
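To see how far back the in-memory ASH data currently reaches (a minimal check; the span shrinks as activity grows):

select min(sample_time) oldest_sample,
       max(sample_time) newest_sample,
       count(*) samples
from v$active_session_history;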

How can the information available through the v$active_session_history view be used? If a session that is experiencing delays or hangs has been identified, and the goal is to identify the SQL statement(s) the session is issuing along with the wait events being experienced for a particular time period, a query similar to this one can be issued:

SELECT  C.SQL_TEXT,
        B.NAME,
        COUNT(*),
        SUM(TIME_WAITED)
FROM    v$ACTIVE_SESSION_HISTORY A,
        v$EVENT_NAME B,
        v$SQLAREA C
WHERE   A.SAMPLE_TIME BETWEEN TO_TIMESTAMP('10-JUL-04 09:57:00 PM','DD-MON-RR HH:MI:SS PM') AND
                              TO_TIMESTAMP('10-JUL-04 09:59:00 PM','DD-MON-RR HH:MI:SS PM') AND
        A.EVENT# = B.EVENT# AND
        A.SESSION_ID= 123 AND
        A.SQL_ID = C.SQL_ID
GROUP BY C.SQL_TEXT, B.NAME;


Sunday, November 8, 2015

Oracle Database || ASH and AWR Performance Tuning Scripts

Listed below are some SQL queries which are useful for performance tuning. They are based on the Active Session History V$ views for a current perspective of performance, and on the DBA_HIST_* AWR history tables for performance data pertaining to a period of time in the past.

Top Recent Wait Events

col EVENT format a60 

select * from (
select active_session_history.event,
sum(active_session_history.wait_time +
active_session_history.time_waited) ttl_wait_time
from v$active_session_history active_session_history
where active_session_history.event is not null
group by active_session_history.event
order by 2 desc)
where rownum < 6
/

Top Wait Events Since Instance Startup

col event format a60

select event, total_waits, time_waited
from v$system_event e, v$event_name n
where n.event_id = e.event_id
and n.wait_class !='Idle'
and n.wait_class = (select wait_class from v$session_wait_class
 where wait_class !='Idle'
 group by wait_class having
sum(time_waited) = (select max(sum(time_waited)) from v$session_wait_class
where wait_class !='Idle'
group by (wait_class)))
order by 3;

List Of Users Currently Waiting

col username format a12
col sid format 9999
col state format a15
col event format a50
col wait_time format 99999999
set pagesize 100
set linesize 120

select s.sid, s.username, se.event, se.state, se.wait_time
from v$session s, v$session_wait se
where s.sid=se.sid
and se.event not like 'SQL*Net%'
and se.event not like '%rdbms%'
and s.username is not null
order by se.wait_time;

Find The Main Database Wait Events In A Particular Time Interval

First determine the snapshot id values for the period in question.
In this example we need to find the SNAP_ID for the period 10 PM to 11 PM on the 14th of November, 2012.
select snap_id,begin_interval_time,end_interval_time
from dba_hist_snapshot
where to_char(begin_interval_time,'DD-MON-YYYY')='14-NOV-2012'
and EXTRACT(HOUR FROM begin_interval_time) between 22 and 23;
set verify off
select * from (
select active_session_history.event,
sum(active_session_history.wait_time +
active_session_history.time_waited) ttl_wait_time
from dba_hist_active_sess_history active_session_history
where event is not null
and SNAP_ID between &ssnapid and &esnapid
group by active_session_history.event
order by 2 desc)
where rownum < 6
/

Top CPU Consuming SQL During A Certain Time Period

Note – in this case we are finding the Top 5 CPU intensive SQL statements executed between 9.00 AM and 11.00 AM
select * from (
select
SQL_ID,
 sum(CPU_TIME_DELTA),
sum(DISK_READS_DELTA),
count(*)
from
DBA_HIST_SQLSTAT a, dba_hist_snapshot s
where
s.snap_id = a.snap_id
and s.begin_interval_time > sysdate -1
and EXTRACT(HOUR FROM S.END_INTERVAL_TIME) between 9 and 11
group by
SQL_ID
order by
sum(CPU_TIME_DELTA) desc)
where rownum < 6;

Which Database Objects Experienced the Most Number of Waits in the Past One Hour

set linesize 120
col event format a40
col object_name format a40

select * from 
(
  select dba_objects.object_name,
 dba_objects.object_type,
active_session_history.event,
 sum(active_session_history.wait_time +
  active_session_history.time_waited) ttl_wait_time
from v$active_session_history active_session_history,
    dba_objects
 where 
active_session_history.sample_time between sysdate - 1/24 and sysdate
and active_session_history.current_obj# = dba_objects.object_id
 group by dba_objects.object_name, dba_objects.object_type, active_session_history.event
 order by 4 desc)
where rownum < 6;

Top Segments ordered by Physical Reads

col segment_name format a20
col owner format a10 
select segment_name,object_type,total_physical_reads
 from ( select owner||'.'||object_name as segment_name,object_type,
value as total_physical_reads
from v$segment_statistics
 where statistic_name in ('physical reads')
 order by total_physical_reads desc)
where rownum < 6;

Top 5 SQL statements in the past one hour

select * from (
select active_session_history.sql_id,
 dba_users.username,
 sqlarea.sql_text,
sum(active_session_history.wait_time +
active_session_history.time_waited) ttl_wait_time
from v$active_session_history active_session_history,
v$sqlarea sqlarea,
 dba_users
where 
active_session_history.sample_time between sysdate -  1/24  and sysdate
  and active_session_history.sql_id = sqlarea.sql_id
and active_session_history.user_id = dba_users.user_id
 group by active_session_history.sql_id,sqlarea.sql_text, dba_users.username
 order by 4 desc )
where rownum < 6;

SQL with the highest I/O in the past one day

select * from 
(
SELECT /*+LEADING(x h) USE_NL(h)*/ 
       h.sql_id
,      SUM(10) ash_secs
FROM   dba_hist_snapshot x
,      dba_hist_active_sess_history h
WHERE   x.begin_interval_time > sysdate -1
AND    h.SNAP_id = X.SNAP_id
AND    h.dbid = x.dbid
AND    h.instance_number = x.instance_number
AND    h.event in  ('db file sequential read','db file scattered read')
GROUP BY h.sql_id
ORDER BY ash_secs desc )
where rownum < 6;

Top CPU consuming queries since past one day

select * from (
select 
 SQL_ID, 
 sum(CPU_TIME_DELTA), 
 sum(DISK_READS_DELTA),
 count(*)
from 
 DBA_HIST_SQLSTAT a, dba_hist_snapshot s
where
 s.snap_id = a.snap_id
 and s.begin_interval_time > sysdate -1
 group by 
 SQL_ID
order by 
 sum(CPU_TIME_DELTA) desc)
where rownum < 6;

Find what the top SQL was at a particular reported time of day

First determine the snapshot id values for the period in question.
In this example we need to find the SNAP_ID for the period 10 PM to 11 PM on the 14th of November, 2012.
select snap_id,begin_interval_time,end_interval_time
from dba_hist_snapshot
where to_char(begin_interval_time,'DD-MON-YYYY')='14-NOV-2012'
and EXTRACT(HOUR FROM begin_interval_time) between 22 and 23;
select * from
 (
select
 sql.sql_id c1,
sql.buffer_gets_delta c2,
sql.disk_reads_delta c3,
sql.iowait_delta c4
 from
dba_hist_sqlstat sql,
dba_hist_snapshot s
 where
 s.snap_id = sql.snap_id
and
 s.snap_id= &snapid
 order by
 c3 desc)
 where rownum < 6 
/

Analyse a particular SQL ID and see the trends for the past day

select
 s.snap_id,
 to_char(s.begin_interval_time,'HH24:MI') c1,
 sql.executions_delta c2,
 sql.buffer_gets_delta c3,
 sql.disk_reads_delta c4,
 sql.iowait_delta c5,
sql.cpu_time_delta c6,
 sql.elapsed_time_delta c7
 from
 dba_hist_sqlstat sql,
 dba_hist_snapshot s
 where
 s.snap_id = sql.snap_id
 and s.begin_interval_time > sysdate -1
 and
sql.sql_id='&sqlid'
 order by c7
 /

Do we have multiple plan hash values for the same SQL ID – in that case a changed plan may be causing the bad performance

select 
  SQL_ID 
, PLAN_HASH_VALUE 
, sum(EXECUTIONS_DELTA) EXECUTIONS
, sum(ROWS_PROCESSED_DELTA) CROWS
, trunc(sum(CPU_TIME_DELTA)/1000000/60) CPU_MINS
, trunc(sum(ELAPSED_TIME_DELTA)/1000000/60)  ELA_MINS
from DBA_HIST_SQLSTAT 
where SQL_ID in (
'&sqlid') 
group by SQL_ID , PLAN_HASH_VALUE
order by SQL_ID, CPU_MINS;

Top 5 Queries for past week based on ADDM recommendations

/*
Top 5 SQL_IDs for the last 7 days as identified by ADDM
from DBA_ADVISOR_RECOMMENDATIONS and dba_advisor_log
*/

col SQL_ID form a16
col Benefit form 9999999999999
select * from (
select b.ATTR1 as SQL_ID, max(a.BENEFIT) as "Benefit" 
from DBA_ADVISOR_RECOMMENDATIONS a, DBA_ADVISOR_OBJECTS b 
where a.REC_ID = b.OBJECT_ID
and a.TASK_ID = b.TASK_ID
and a.TASK_ID in (select distinct b.task_id
from dba_hist_snapshot a, dba_advisor_tasks b, dba_advisor_log l
where a.begin_interval_time > sysdate - 7 
and  a.dbid = (select dbid from v$database) 
and a.INSTANCE_NUMBER = (select INSTANCE_NUMBER from v$instance) 
and to_char(a.begin_interval_time, 'yyyymmddHH24') = to_char(b.created, 'yyyymmddHH24') 
and b.advisor_name = 'ADDM' 
and b.task_id = l.task_id 
and l.status = 'COMPLETED') 
and length(b.ATTR4) > 1 group by b.ATTR1
order by max(a.BENEFIT) desc) where rownum < 6;


Source: http://gavinsoorma.com/2012/11/ash-and-awr-performance-tuning-scripts/

Automatic Workload Repository (AWR) || How to generate AWR report

Oracle has provided many performance gathering and reporting tools over the years. In Oracle 10g, Statspack evolved into the Automatic Workload Repository (AWR).

The AWR is used to collect performance statistics including:

  • Wait events used to identify performance problems.
  • Time model statistics indicating the amount of DB time associated with a process from the V$SESS_TIME_MODEL and V$SYS_TIME_MODEL views.
  • Active Session History (ASH) statistics from the V$ACTIVE_SESSION_HISTORY view.
  • Some system and session statistics from the V$SYSSTAT and V$SESSTAT views.
  • Object usage statistics.
  • Resource intensive SQL statements.

Snapshots:

By default snapshots of the relevant data are taken every hour and retained for 7 days. The default values for these settings can be altered using the following procedure.


BEGIN
DBMS_WORKLOAD_REPOSITORY.modify_snapshot_settings(
retention => 43200, -- Minutes (= 30 Days). Current value retained if NULL.
interval => 30); -- Minutes. Current value retained if NULL.
END;
/
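You can verify the resulting settings afterwards (a minimal check against the AWR control view):

select snap_interval, retention from dba_hist_wr_control;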


Extra snapshots can be taken and existing snapshots can be removed, as shown below.


EXEC DBMS_WORKLOAD_REPOSITORY.create_snapshot;

BEGIN
DBMS_WORKLOAD_REPOSITORY.drop_snapshot_range (
low_snap_id => 22,
high_snap_id => 32);
END;
/

Baselines:

A baseline is a pair of snapshots that represents a specific period of usage. Once baselines are defined, they can be used to compare current performance against similar periods in the past. You may wish to create a baseline to represent a period of batch processing.

BEGIN
DBMS_WORKLOAD_REPOSITORY.create_baseline (
start_snap_id => 210,
end_snap_id => 220,
baseline_name => 'batch baseline');
END;
/




If you do not have Oracle Enterprise Manager tool installed then you can create the AWR reports manually using the following commands:
1. Creating Snapshot
BEGIN
DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT ();
END;
/
The list of the snapshot Ids along with database Ids is available in the view DBA_HIST_SNAPSHOT.
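For example, a minimal query to list recent snapshots:

select snap_id, dbid, begin_interval_time, end_interval_time
from dba_hist_snapshot
order by snap_id;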
2. Dropping a Range of Snapshots.
Refer to the SNAP_ID column of the DBA_HIST_SNAPSHOT view to see the available snapshots. To delete the snapshots with SNAP_ID from 102 to 122:
BEGIN
DBMS_WORKLOAD_REPOSITORY.DROP_SNAPSHOT_RANGE (low_snap_id => 102, high_snap_id => 122, dbid => 8187786345);
END;
/
3. Modifying Snapshot Settings
If you want to set the retention period to 43200 minutes (30 days), the interval between snapshots to 30 minutes, and the number of Top SQL statements to flush for each SQL criterion to 100, then use the following:
BEGIN
DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS( retention => 43200,
interval => 30, topnsql => 100, dbid => 7123356265);
END;
/
The dbid is optional.
4. Extract the AWR Data
The awrextr.sql script extracts the AWR data for a range of snapshots from the database into a Data Pump export file. Once created, this dump file can be transported to another system where the extracted data can be loaded. To run the awrextr.sql script, you need to be connected to the database as the SYS user.
To extract AWR data at the SQL prompt, enter:
SQL> @$ORACLE_HOME/rdbms/admin/awrextr.sql
5. Load the AWR Data
Once the export dump file is transported to the target system, you can load the extracted AWR data using the awrload.sql script. The awrload.sql script will first create a staging schema where the snapshot data is transferred from the Data Pump file into the database. The data is then transferred from the staging schema into the appropriate AWR tables. To run the awrload.sql script, you need to be connected to the database as the SYS user.
To load AWR data at the SQL prompt, enter:
SQL> @$ORACLE_HOME/rdbms/admin/awrload.sql
6. Generate AWR Reports
The awrrpt.sql SQL script generates an HTML or text report that displays statistics for a range of snapshot Ids.
To generate an HTML or text report for a range of snapshot Ids, run the awrrpt.sql script at the SQL prompt:
SQL> @$ORACLE_HOME/rdbms/admin/awrrpt.sql
First, you need to specify whether you want an HTML or a text report.
Enter value for report_type: text
Specify the number of days for which you want to list snapshot Ids.
Enter value for num_days: 2
After the list displays, you are prompted for the beginning and ending snapshot Id for the workload repository report.
Enter value for begin_snap: 95
Enter value for end_snap: 97
Next, accept the default report name or enter a report name. The default name is accepted in the following example:
Enter value for report_name:
Using the report name awrrpt_1_95_97
The workload repository report is generated.
The following related scripts are available:

  • awrrpt.sql - Generates an HTML or text report that displays statistics for a range of snapshot Ids.
  • awrrpti.sql - Generates an HTML or text report that displays statistics for a range of snapshot Ids on a specified database and instance.
  • awrsqrpt.sql - Generates an HTML or text report that displays statistics of a particular SQL statement for a range of snapshot Ids. Run this report to inspect or debug the performance of a SQL statement.
  • awrsqrpi.sql - Generates an HTML or text report that displays statistics of a particular SQL statement for a range of snapshot Ids on a specified database and instance. Run this report to inspect or debug the performance of a SQL statement on a specific database and instance.
  • awrddrpt.sql - Generates an HTML or text report that compares detailed performance attributes and configuration settings between two selected time periods.
  • awrddrpi.sql - Generates an HTML or text report that compares detailed performance attributes and configuration settings between two selected time periods on a specific database and instance.
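For example, to report on a single SQL statement, run awrsqrpt.sql the same way as awrrpt.sql; in addition to the snapshot range, it prompts for the SQL Id:

SQL> @$ORACLE_HOME/rdbms/admin/awrsqrpt.sql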



Workload Repository Views:

The following workload repository views are available:


  • V$ACTIVE_SESSION_HISTORY - Displays the active session history (ASH) sampled every second.
  • V$METRIC - Displays metric information.
  • V$METRICNAME - Displays the metrics associated with each metric group.
  • V$METRIC_HISTORY - Displays historical metrics.
  • V$METRICGROUP - Displays all metrics groups.
  • DBA_HIST_ACTIVE_SESS_HISTORY - Displays the history contents of the active session history.
  • DBA_HIST_BASELINE - Displays baseline information.
  • DBA_HIST_DATABASE_INSTANCE - Displays database environment information.
  • DBA_HIST_SNAPSHOT - Displays snapshot information.
  • DBA_HIST_SQL_PLAN - Displays SQL execution plans.
  • DBA_HIST_WR_CONTROL - Displays AWR settings.

Re-sizing Redo Logs

Every DBA knows that the size of the redo logs is very important.  Too small, and the frequent log switches tie up the LGWR, ARCH and DBWR background processes.  Too large, and recovery after an instance crash takes longer.

As a general rule, size your online redo logs so that they switch no more than 5 times per hour during peak DML times.  Below is a script that measures redo log sizing and log switch frequency.
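The original post did not include the script, so here is a minimal sketch: it counts log switches per hour over the last day and lists the current log sizes.

col switch_hour format a16
select to_char(first_time,'YYYY-MM-DD HH24') as switch_hour,
       count(*) as log_switches
from v$log_history
where first_time > sysdate - 1
group by to_char(first_time,'YYYY-MM-DD HH24')
order by 1;

select group#, bytes/1024/1024 as mb, status from v$log;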


Some things to consider when changing the size of online redo logs:


You do it for performance reasons – mostly because your original files are too small and you get log switches too often.

It is an online operation – no end user will even notice it, apart from maybe better performance afterwards.
You will not put your database at risk with it – if you do it right :-)

Let me give you an example:


Current redo file information is as below:




SQL> select group#,members,status,bytes/1024/1024 as mb from v$log;

 GROUP#    MEMBERS STATUS                   MB

---------- ---------- ---------------- ----------
 1          1 INACTIVE                 50
 2          1 INACTIVE                 50
 3          1 CURRENT                  50


SQL> select group#,member from v$logfile;


 GROUP# MEMBER

---------- ------------------------------------------------------------
 3 /u01/app/oracle/oradata/orcl/redo03.log
 2 /u01/app/oracle/oradata/orcl/redo02.log
 1 /u01/app/oracle/oradata/orcl/redo01.log

Now we want logfiles sized at 100m, and we want them mirrored [multiplexed] for safety. First, we create new log groups of the desired size:


SQL> alter database add logfile
     ('/u01/app/oracle/oradata/orcl/redo_g4m1.rdo',
      '/u01/app/oracle/oradata/orcl/redo_g4m2.rdo')
     size 100m;

Database altered.


SQL> alter database add logfile

     ('/u01/app/oracle/oradata/orcl/redo_g5m1.rdo',
      '/u01/app/oracle/oradata/orcl/redo_g5m2.rdo')
     size 100m;

Database altered.

SQL> alter database add logfile
     ('/u01/app/oracle/oradata/orcl/redo_g6m1.rdo',
      '/u01/app/oracle/oradata/orcl/redo_g6m2.rdo')
     size 100m;

Database altered.


SQL> select group#,members,status,bytes/1024/1024 as mb from v$log;


 GROUP#    MEMBERS STATUS                   MB

---------- ---------- ---------------- ----------
 1          1 INACTIVE                 50
 2          1 INACTIVE                 50
 3          1 CURRENT                  50
 4          2 UNUSED                  100
 5          2 UNUSED                  100
 6          2 UNUSED                  100

6 rows selected.


Here we use the suffix rdo because it is not as easily misinterpreted as a mere ASCII text file that can be deleted if space is needed.


Now we are going to drop the small files – with SQL commands first!



SQL> alter database drop logfile group 1;
Database altered.
That was easy, wasn’t it? Notice that the OS files are not gone yet – we have to delete them manually (if they are not OMF). All groups that have the status INACTIVE can be dropped that way.


SQL> select group#,members,status,bytes/1024/1024 as mb from v$log;

 GROUP#    MEMBERS STATUS                   MB

---------- ---------- ---------------- ----------
 2          1 INACTIVE                 50
 3          1 CURRENT                  50
 4          2 UNUSED                  100
 5          2 UNUSED                  100
 6          2 UNUSED                  100

SQL> alter database drop logfile group 2;


Database altered.
If the status is CURRENT or ACTIVE, it won’t be possible to drop the group – you can do nothing wrong here!



SQL>  alter database drop logfile group 3;

 alter database drop logfile group 3
*
ERROR at line 1:
ORA-01623: log 3 is current log for instance orcl (thread 1) - cannot drop
ORA-00312: online log 3 thread 1: '/u01/app/oracle/oradata/orcl/redo03.log'
CURRENT means that LGWR is actually writing redo into this group. After a log switch, the status can become ACTIVE or INACTIVE:


SQL> alter system switch logfile;

System altered.


SQL> select group#,members,status,bytes/1024/1024 as mb from v$log;


 GROUP#    MEMBERS STATUS                   MB

---------- ---------- ---------------- ----------
 3          1 ACTIVE                   50
 4          2 CURRENT                 100
 5          2 UNUSED                  100
 6          2 UNUSED                  100

ACTIVE means that the checkpoint associated with the redo in that group has not yet finished. In other words, the group is still needed for a possible instance recovery and therefore cannot be dropped. But we can force the checkpoint:


SQL> alter system checkpoint;

System altered.

SQL> select group#,members,status,bytes/1024/1024 as mb from v$log;

 GROUP#    MEMBERS STATUS                   MB

---------- ---------- ---------------- ----------
 3          1 INACTIVE                 50
 4          2 CURRENT                 100
 5          2 UNUSED                  100
 6          2 UNUSED                  100

SQL> alter database drop logfile group 3;

Database altered.
Now we are finished with the SQL commands, and we can continue with deleting the files of the small groups from the OS. The instance was OPEN all the time, and no user was in any way affected by this whole action. By the way, we mirrored the log groups to protect the database against the really serious problems that would otherwise arise if a whole log group got lost – for example because someone deletes a file from the OS.

Friday, November 6, 2015

Cloning Oracle Database

CPU-intensive reports should be generated on a database other than the production database, because they affect performance on the production server negatively. For such cases we need to create another database for processing purposes.

1. set ORACLE_SID=dbname

2. sqlplus / as sysdba

3. SQL> Select name from v$database;
>> dbname
4. SQL> Select log_mode from v$database;

5. In case of archive log mode, we will get >> ARCHIVELOG

6. SQL> archive log list

7. For the cloning purpose, we need to create the DB folder in the corresponding location for the new instance. Let's assume we want to clone the database on the same server under the name dbname2. For this we create a folder with this name in the oradata folder, and inside this directory we also create the ARCHIVED_LOGS directory.

8. Then, in the Oracle home location ..\db_1\database, make a file named initdbname2.ora following the pattern of the other init files. The content of the file should be db_name=dbname2 [as per our example].

9. Now we need to make an instance for this newly created database:
oradim -new -sid dbname2
net start OracleServicedbname2
[For Windows environments; not needed on Unix or Linux]

10. Now, going back to the main database:
SQL> Select * from v$backup;
SQL> select tablespace_name from dba_tablespaces;

11. To put the main DB into backup mode:
SQL> alter database begin backup;

12. Now, after executing the query below:
      SQL> Select * from v$backup;
      we should see the backup status as ACTIVE.

13. Now go to the datafile location of the database, the oradata\dbname folder:
      cp *.dbf /usr/home/usrname/filelocation

14. Now, again through sqlplus, we need to end backup mode on the main database:
     SQL> alter database end backup;
     After that, for checking, we can use Select * from v$backup; which should indicate NOT ACTIVE.

15. SQL> alter system archive log current;

16. SQL> alter database backup controlfile to trace as '/usr/home/usrname/filelocation/trace.sql' reuse;

17. Now, in the trace.sql file, we need to replace dbname with dbname2,
    and CREATE CONTROLFILE REUSE DATABASE dbname NORESETLOGS should be
    changed to CREATE CONTROLFILE SET DATABASE dbname2 RESETLOGS.

18. Now we can copy the previously copied datafiles to the dbname2 file location:
      cp /usr/home/usrname/filelocation/* .
19. Now we will access dbname2 through sqlplus:
set ORACLE_SID=dbname2
sqlplus / as sysdba
SQL> @/usr/home/usrname/filelocation/trace.sql

The database will start successfully. Now we can see all the data of dbname in dbname2:
SQL> Select name from v$database;
SQL> alter system archive log current;

Now we need to apply the archive logs of dbname to dbname2 to recover the database:
SQL> set logsource 'archive log directory of dbname'
SQL> recover database using backup controlfile until cancel;
SQL> alter database open resetlogs;
SQL> Select Open_Mode from v$database;

Redo Log File || Operations

Redo log files record every change made by users. This guarantees that we will not lose any change, even if it was not written to the data files due to instance failure. The most crucial structure for recovery operations is the redo log, which consists of two or more preallocated files that store all changes made to the database as they occur. Every instance of an Oracle Database has an associated redo log to protect the database in case of an instance failure.
Redo log files are filled with redo records. A redo record, also called a redo entry, is made up of a group of change vectors, each of which is a description of a change made to a single block in the database. Redo records are buffered in a circular fashion in the redo log buffer of the SGA and are written to one of the redo log files by the Log Writer (LGWR) database background process. When the current redo log file fills, LGWR begins writing to the next available redo log file. When the last available redo log file is filled, LGWR returns to the first redo log file and writes to it, starting the cycle again.

The redo log of a database consists of two or more redo log files. The database requires a minimum of two files to guarantee that one is always available for writing while the other is being archived (if the database is in ARCHIVELOG mode).

As part of this circular writing, a log switch is the point at which the database stops writing to one redo log file and begins writing to another. Normally, a log switch occurs when the current redo log file is completely filled and writing must continue to the next redo log file. Oracle Database assigns each redo log file a new log sequence number every time a log switch occurs and LGWR begins writing to it.
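You can observe the groups, their sequence numbers, and their statuses (CURRENT, ACTIVE, INACTIVE) with a simple query:

SQL> select group#, thread#, sequence#, status from v$log order by sequence#;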

Multiplexing is the practice of keeping multiple copies of the redo log files to guarantee recoverability in case of failure. It is implemented by creating groups of redo log files. A group consists of a redo log file and its multiplexed copies. Each identical copy is said to be a member of the group. Each redo log group is defined by a number, such as group 1, group 2, and so on.

Redo log files are written in a fixed block size; depending on the storage, this might be 4K or 512B.
However, with a block size of 4K, there is increased redo wastage. In fact, the amount of redo wastage in 4K blocks versus 512B blocks is significant. You can determine the amount of redo wastage by viewing the statistics stored in the V$SESSTAT and V$SYSSTAT views:

SQL> SELECT name, value FROM v$sysstat WHERE name = 'redo wastage';

Beginning with Oracle Database 11g Release 2, you can specify the block size of online redo log files with the BLOCKSIZE keyword in the CREATE DATABASE, ALTER DATABASE, and CREATE CONTROLFILE statements. The permissible block sizes are 512, 1024, and 4096.

Adding Redo Log File:

To create a new group of redo log files, use the SQL statement ALTER DATABASE with the ADD LOGFILE clause. The following statement adds a new group of redo logs to the database:

ALTER DATABASE ADD LOGFILE
  ('/oracle/dbs/log1c.rdo', '/oracle/dbs/log2c.rdo') SIZE 100M;

You can also specify the number that identifies the group using the GROUP clause:

ALTER DATABASE ADD LOGFILE GROUP 10
  ('/oracle/dbs/log1c.rdo', '/oracle/dbs/log2c.rdo') SIZE 100M BLOCKSIZE 512;

To create new redo log members for an existing group, use the SQL statement ALTER DATABASE with the ADD LOGFILE MEMBER clause. The following statement adds a new redo log member to redo log group number 2:

ALTER DATABASE ADD LOGFILE MEMBER '/oracle/dbs/log2b.rdo' TO GROUP 2;

Notice that filenames must be specified, but sizes need not be; the size of the new members is determined from the size of the existing members of the group. When using the ALTER DATABASE statement, you can alternatively identify the target group by specifying all of the other members of the group in the TO clause, as shown in the following example:

ALTER DATABASE ADD LOGFILE MEMBER '/oracle/dbs/log2c.rdo'
  TO ('/oracle/dbs/log2a.rdo', '/oracle/dbs/log2b.rdo');


Steps for Renaming Redo Log Members:

  • Shut down the database: SHUTDOWN
  • Copy the redo log files to the new location.
  • Start up the database and mount it, but do not open it:
CONNECT / AS SYSDBA
STARTUP MOUNT
  • Rename the redo log members using the ALTER DATABASE statement with the RENAME FILE clause:
ALTER DATABASE
RENAME FILE '/diska/logs/log1a.rdo', '/diska/logs/log2a.rdo'
TO '/diskc/logs/log1c.rdo', '/diskc/logs/log2c.rdo';
  • Open the database for normal operation; the redo log alterations take effect when the database is opened:
ALTER DATABASE OPEN;

Dropping Redo Log File:

To drop a redo log member, you must have the ALTER DATABASE system privilege:

ALTER DATABASE DROP LOGFILE MEMBER '/oracle/dbs/log3c.rdo';

When a redo log member is dropped from the database, the operating system file is not deleted from disk.


Forcing Log Switches: ALTER SYSTEM SWITCH LOGFILE;

Clearing Log Files:

The following statement clears the log files in redo log group number 3:

ALTER DATABASE CLEAR LOGFILE GROUP 3;

If the corrupt redo log file has not been archived, use the UNARCHIVED keyword in the statement:

ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;


Redo Log Data Dictionary Views:

The following views provide information on redo logs:

  • V$LOG - Displays the redo log file information from the control file.
  • V$LOGFILE - Identifies redo log groups and members and member status.
  • V$LOG_HISTORY - Contains log history information.

Thursday, November 5, 2015

Target: Exporting and importing Oracle Data

  1. Create a directory object for the dump file location:
create directory dirname as '/usr/paul/home/location';

   2. Grant read and write permission on that directory:
grant read, write on directory dirname to username;

  3. Export command:
expdp user_id/pass@db_sid tables=<table_list> DIRECTORY=dirname
DUMPFILE=<dumpfile>.dmp LOGFILE=<logfile>.log

  4. Import command:
impdp user_id/pass@db_sid tables=<table_list> DIRECTORY=dirname
DUMPFILE=<dumpfile>.dmp LOGFILE=<logfile>.log
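As a concrete illustration (the schema, tables, and file names below are made up for the example):

expdp scott/tiger@db_sid tables=EMP,DEPT directory=dirname dumpfile=emp_dept.dmp logfile=expdp_emp_dept.log

impdp scott/tiger@db_sid tables=EMP,DEPT directory=dirname dumpfile=emp_dept.dmp logfile=impdp_emp_dept.log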