
Wednesday, September 4, 2019

Generate Incident Report in Oracle Database


The Automatic Diagnostic Repository (ADR) is a hierarchical, file-based repository for diagnostic information.

Its directory structure is as follows:

$ADR_BASE/diag/rdbms/{DB-name}/{SID}/alert
$ADR_BASE/diag/rdbms/{DB-name}/{SID}/cdump
$ADR_BASE/diag/rdbms/{DB-name}/{SID}/hm
$ADR_BASE/diag/rdbms/{DB-name}/{SID}/incident
$ADR_BASE/diag/rdbms/{DB-name}/{SID}/trace
$ADR_BASE/diag/rdbms/{DB-name}/{SID}/{others}

To generate an incident report quickly, we can follow the steps below:

adrci> show problem
adrci> show incident

adrci> show incident -mode detail -p "incident_id=incident_no"
adrci> ips create package problem <problem_id> correlate all
adrci> ips generate package <package_number> in "/tmp"



Friday, November 18, 2016

How to kill a process on a port on Ubuntu

Sometimes when we try to bring up a service, we get a message like "the port *** is used by another service", and our app cannot start on that port. In that case we need to free the port by stopping the running process, and for that we first need to find out which process is using the port.

The command below gives us the id of the process using the port:

sudo lsof -t -i:8080 [for this example, 8080 is our reference port number]

To see the process in detail, we can use:
ps -ef | grep <process_id>


And then we can kill the process by:
kill -9 <process_id>

We can use the below command to kill the process in one shot:
sudo kill $(sudo lsof -t -i:8080)
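The steps above can also be wrapped in a small reusable function. This is only a sketch: it assumes lsof is installed, and it sends a polite SIGTERM first rather than jumping straight to -9.

```shell
#!/bin/sh
# kill_port: terminate whatever process is listening on the given TCP port.
# Sketch; assumes lsof is available. Tries SIGTERM first, leaving
# kill -9 as a manual last resort.
kill_port() {
    port="$1"
    pids=$(lsof -t -i:"$port" 2>/dev/null)
    if [ -z "$pids" ]; then
        echo "no process found on port $port"
        return 1
    fi
    echo "killing: $pids"
    kill $pids    # intentionally unquoted: there may be several pids
}
```

For example, kill_port 8080 replaces the lsof/kill pair above; escalate to kill -9 by hand only if the process ignores SIGTERM.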

Thursday, November 17, 2016

Shell script for deleting files older than some days

find is the common tool for this kind of task:

find ./target_dir ! -name 'stayhere.txt' -mtime +5 -type f -delete


EXPLANATIONS

./target_dir : your target directory (replace with your own)
! -name 'stayhere.txt' : the file to exclude
-mtime +5 : older than 5 days
-type f : only files
-delete : no surprise; remove it to test your find filter before running the whole command

And take care that ./target_dir exists, to avoid bad surprises!
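The same command can be wrapped as a small function with the parts parameterized. A sketch; it assumes GNU find (for the -delete action):

```shell
#!/bin/sh
# purge_old: delete regular files older than N days under a directory,
# keeping one named file. Sketch; assumes GNU find's -delete action.
purge_old() {
    dir="$1"; keep="$2"; days="$3"
    find "$dir" ! -name "$keep" -mtime +"$days" -type f -delete
}
```

Here purge_old ./target_dir stayhere.txt 5 reproduces the command above; swap -delete for -print inside the function to preview before deleting.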

Tuesday, April 26, 2016

Find and kill a process in one line using bash

For operations and other purposes, we sometimes need to kill a process on a Linux system. To do that, we first find the process id and then kill the process using that id, i.e.:

[agentrpt@localhost ~]$ ps -ef | grep 'jar'
agentrpt 13131     1  0 16:49 ?        00:00:00 java -jar ABS_ALERT_MIDDLEWARE.jar
agentrpt 13278     1  0 16:54 ?        00:00:00 java -jar SMSMailClient.jar
agentrpt 13402 13367  0 16:58 pts/0    00:00:00 grep jar

To kill the process running java -jar SMSMailClient.jar, we need to execute the command:
kill 13278

We can do it in a simpler way. If we need this regularly, we can create a script from the command below.

In bash, you should be able to do:

kill $(ps aux | grep '[SMS]MailClient.jar' | awk '{print $2}')
Details on its workings are as follows:

The ps gives you the list of all the processes.
The grep filters that based on your search string, [SMS] is a trick to stop you picking up the actual grep process itself.
The awk just gives you the second field of each line, which is the PID.
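As an aside, pgrep and pkill (from the procps suite) do the find-and-kill in one step. Their -f option matches against the full command line, so the bracket trick is not needed; the jar name below is the example from above.

```shell
# List matching PIDs; pgrep -f matches the whole command line, so no
# [SMS] trick is required. "|| echo" keeps the line from failing when
# nothing matches.
pgrep -f 'SMSMailClient.jar' || echo "no matching process"
# pkill -f 'SMSMailClient.jar'   # uncomment to send SIGTERM to every match
```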

Thursday, April 21, 2016

Remove all files/directories except for one file

find . ! -name 'filetoexist.txt' -type f -exec rm -f {} +

will remove all files except filetoexist.txt. To remove directories as well, change -type f to -type d and use rm -rf in place of rm -f.

To exclude list of files: ! \( -name one_file -o -name two_file \)

In bash, to use rm !(file.txt), we will have to enable extglob:

$ shopt -s extglob
$ rm !(file.txt)

Note that extglob only works in bash and Korn shell family. And using rm !(file.txt) can cause an Argument list too long error.

In zsh, you can use ^ to negate pattern with extendedglob enabled:

$ setopt extendedglob
$ rm ^file.txt

or use the same !(file.txt) syntax as ksh and bash in zsh, with the ksh_glob and no_bare_glob_qual options enabled.

Wednesday, April 20, 2016

To get IP Address of Sun Server

For a normal user (i.e., not root), ifconfig isn't in the PATH, but it is still the right command.
More specifically: /usr/sbin/ifconfig -a
If we want just the IP address, we need a little scripting:
/usr/sbin/ifconfig -a | awk 'BEGIN { count=0; } { if ( $1 ~ /inet/ ) { count++; if( count==2 ) { print $2; } } }'
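The awk one-liner can be wrapped so it reads any ifconfig-style output from stdin, which also makes it easy to try out. The addresses below are made-up samples, and the assumption (as in the one-liner above) is that the first inet line is the loopback.

```shell
#!/bin/sh
# second_inet: print the address on the second "inet" line of ifconfig
# output (the first is usually loopback). Same logic as the one-liner.
second_inet() {
    awk '$1 ~ /inet/ { if (++count == 2) print $2 }'
}
# Usage: /usr/sbin/ifconfig -a | second_inet
```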

Monday, April 18, 2016

Modifying file using Vim and awk

In some cases we have to make the same kind of change all through a big file, and the vim editor is well suited to this type of task. Suppose we have a file, loglistws.txt, holding a directory listing of .log files, and we need to generate a delete script from its contents.

To do so,

1. We opened the file using the vim editor: vim loglistws.txt
2. Then we used the commands below to add a double quote around each name:
   :%s/EJB_Str/"EJB_Str/g

Then save the file using :w
   :%s/\.log/\.log"/g

3. Now save the file using :wq!
4. Finally, we built a new file from the existing one:
    awk '{print "rm ",$9,$10,$11,$12}' loglistws.txt > myscriptws.sh
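The vim steps exist only to put quotes around the names, so the whole job can also be done in one awk pass. A sketch: it assumes the listing format above, with the file name in field 9 and no spaces in names (extend with $10, $11, $12 as in the original command if names can contain spaces).

```shell
#!/bin/sh
# quote_rm: turn each listing line into an rm command with the file
# name (field 9) quoted. Reads stdin or a file argument.
quote_rm() {
    awk '{printf "rm \"%s\"\n", $9}' "$@"
}
# quote_rm loglistws.txt > myscriptws.sh
```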

Wednesday, March 30, 2016

Archive Redo Log

An Oracle database can run in one of two modes. By default, the database is created in NOARCHIVELOG mode. Oracle Database lets you save filled groups of redo log files to one or more offline destinations using its ARCH process, known collectively as the archived redo log, or more simply the archive log. The process of turning redo log files into archived redo log files is called archiving.

When in NOARCHIVELOG mode the database runs normally, but there is no capacity to perform any type of point-in-time recovery operation or online backup. Thus, you have to shut down the database to back it up, and when you recover the database you can only recover it to the point of the last backup. While this might be fine for a development environment, the big corporate types tend to frown when a week's worth of current production accounting data is lost forever. We can check the archiving mode of a database using the following queries:

SQL> archive log list;
SQL> select log_mode from v$database;
We can also find the archiver process with the command below:
$ ps -ef|grep -i _arc

Sunday, November 1, 2015

Commands to get Hardware Information in Linux

1. lshw - List Hardware

    Lshw extracts its information from various /proc files. It reports detailed and brief information about many different hardware units such as CPU, memory, disks, USB controllers, network adapters, etc.

2. lscpu

    This command reports information about the CPU and processing units. It is a simple, single-purpose tool.

3. hwinfo - Hardware Information

    Hwinfo is another general purpose hardware probing utility that can report detailed and brief information about multiple different hardware components, and more than what lshw can report.
   Command: $ hwinfo --short

4. Inxi

Inxi is a 10K-line mega bash script that fetches hardware details from multiple different sources and commands on the system, and generates a beautiful-looking report that non-technical users can read easily.
    command: $ inxi -Fx

5. lspci - List PCI

    The lspci command lists out all the pci buses and details about the devices connected to them. The vga adapter, graphics card, network adapter, usb ports, sata controllers, etc all fall under this category.
    Filter out specific device information with grep.
   Command: $ lspci -v | grep "VGA" -A 12

6. lsscsi - List scsi devices

    Lists out the scsi/sata devices like hard drives and optical drives.

7. lsusb - List usb buses and device details

    This command shows the USB controllers and details about devices connected to them. By default brief information is printed. Use the verbose option "-v" to print detailed information about each USB port.



8. lsblk - List block devices

Lists all block devices, which are the hard drive partitions and other storage devices like optical drives and flash drives.

9. df - disk space of file systems

Reports the various partitions, their mount points, and the used and available space on each.
    command: $ df -H

10. fdisk

Fdisk is a utility for modifying hard drive partitions, and it can also be used to list partition information.
    command: $ sudo fdisk -l

11. Pydf - Python df

Pydf is written in Python and displays colored output that looks better than df's.


12. mount

The mount command is used to mount/unmount and view mounted file systems.
    command: $ mount | column -t
             $ mount | column -t | grep ext

13. free - Check RAM

Check the amount of used, free and total amount of RAM on system with the free command.
    command: $ free -m

14. hdparm

The hdparm command gets information about sata devices like hard disks.
    command: $ sudo hdparm -i /dev/sda

15. dmidecode

The dmidecode command is different from all the other commands. It extracts hardware information by reading data from the SMBIOS data structures (also called DMI tables).
Commands:
    # display information about the processor/cpu
    $ sudo dmidecode -t processor

    # memory/ram information
    $ sudo dmidecode -t memory

    # bios details
    $ sudo dmidecode -t bios

16. /proc files

Many of the virtual files in the /proc directory contain information about hardware and configuration. Here are some of them:
    Commands:
    # memory information
    $ cat /proc/meminfo
   
    # cpu information
    $ cat /proc/cpuinfo

    #Partition Information
    $ cat /proc/partitions

    #Linux or Kernel Verison
    $ cat /proc/version

    #SCSI / Sata Devices
    $ cat /proc/scsi/scsi
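The /proc files above can be combined into a quick, dependency-free summary. A sketch; it only works on Linux, and the cpuinfo field name varies by architecture:

```shell
#!/bin/sh
# One-screen hardware summary straight from /proc (no extra packages).
if [ -d /proc ]; then
    head -1 /proc/version                          # kernel version string
    grep -m1 'model name' /proc/cpuinfo || true    # CPU model (x86 field name)
    grep -m1 'MemTotal'   /proc/meminfo            # total RAM
else
    echo "no /proc here; this sketch is Linux-only"
fi
```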
   





Tuesday, August 4, 2015

File system in Oracle Database

Different types of database files.

1. Parameter Files: These tell how the instance is configured, e.g., how big the SGA should be. At startup, the database reads the parameter file for its configuration. It also controls things like how many database writer processes to start.

2. Data files: These store all of the user data; tables and table data are kept in data files. A data file is an operating system file. Each database must contain at least one datafile.

3. Redo log files: These are the transaction log of the database. When any operation is performed on the database, a record of it is written to the redo log files. Using redo log files we can recover the database after an instance or media failure, since they contain all the information needed to replay the changes.

4. Control Files: These tell the instance where the datafiles and redo log files are; the instance reads the control file to locate those files. Because of the importance of its role, multiple copies of the control file are kept. This is called multiplexing of control files.

5. Temp Files: Some operations need temporary storage while they run, and temp files provide that space. It might be needed, for example, while processing an ORDER BY query.


6. Password Files: These are used to authenticate users performing administrative tasks (startup/shutdown of the database) over the network.

7. Trace Files/Alert Log: Many background processes run behind the database, and we can use trace files to check their status. In particular, after an unexpected error or failure, the trace files record the state of the affected services.

Tuesday, July 28, 2015

Find and Locate to search files in Linux System

Finding by Name
The most obvious way of searching for files is by name.

         find -name "query"

This will be case sensitive, meaning a search for "file" is different than a search for "File".

To find a file by name, but ignore the case of the query, type:

          find -iname "query"

If you want to find all files that don't adhere to a specific pattern, you can invert the search with "-not" or "!". If you use "!", you must escape the character so that bash does not try to interpret it before find can act:

          find -not -name "query_to_avoid"

Or

          find \! -name "query_to_avoid"
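The case-sensitivity difference above is easy to see in a throwaway directory. A sketch; it assumes a case-sensitive filesystem, as on typical Linux setups:

```shell
#!/bin/sh
tmp=$(mktemp -d)                  # scratch directory
touch "$tmp/File.txt" "$tmp/file.txt"
find "$tmp" -name  "file.txt"     # matches only the lowercase name
find "$tmp" -iname "file.txt"     # matches both names
rm -rf "$tmp"
```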

Finding by Type
You can specify the type of files you want to find with the "-type" parameter. It works like this:

          find -type type_descriptor query

Some of the most common descriptors that you can use to specify the type of file are here:

           f: regular file

           d: directory

           l: symbolic link

           c: character devices

           b: block devices

For instance, if we wanted to find all of the character devices on our system, we could issue this command:

find / -type c
/dev/parport0
/dev/snd/seq
/dev/snd/timer
/dev/autofs
/dev/cpu/microcode
/dev/vcsa7
/dev/vcs7
/dev/vcsa6
/dev/vcs6
/dev/vcsa5
/dev/vcs5
/dev/vcsa4
. . .
We can search for all files that end in ".conf" like this:

        find / -type f -name "*.conf"

/var/lib/ucf/cache/:etc:rsyslog.d:50-default.conf
/usr/share/base-files/nsswitch.conf
/usr/share/initramfs-tools/event-driven/upstart-jobs/mountall.conf
/usr/share/rsyslog/50-default.conf
/usr/share/adduser/adduser.conf
/usr/share/davfs2/davfs2.conf
/usr/share/debconf/debconf.conf
/usr/share/doc/apt-utils/examples/apt-ftparchive.conf
. . .

Filtering by Time and Size
Find gives you a variety of ways to filter results by size and time.

by Size

You can filter by size with the use of the "-size" parameter.

We add a suffix on the end of our value that specifies how we are counting. These are some popular options:

c: bytes

k: Kilobytes

M: Megabytes

G: Gigabytes

b: 512-byte blocks

To find all files that are exactly 50 bytes, type:
find / -size 50c

To find all files less than 50 bytes, we can use this form instead:
find / -size -50c

To Find all files more than 700 Megabytes, we can use this command:

find / -size +700M
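These size filters are easy to verify on synthetic files. A sketch; it assumes GNU truncate is available:

```shell
#!/bin/sh
tmp=$(mktemp -d)
truncate -s 50   "$tmp/exact"    # exactly 50 bytes
truncate -s 10   "$tmp/small"    # under 50 bytes
truncate -s 4096 "$tmp/big"      # over 50 bytes
find "$tmp" -type f -size 50c    # prints only "exact"
find "$tmp" -type f -size -50c   # prints only "small"
find "$tmp" -type f -size +50c   # prints only "big"
rm -rf "$tmp"
```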


By Time

Linux stores time data about access times, modification times, and change times.

Access Time: Last time a file was read or written to.

Modification Time: Last time the contents of the file were modified.

Change Time: Last time the file's inode meta-data was changed.

We can use these with the "-atime", "-mtime", and "-ctime" parameters. These can use the plus and minus symbols to specify greater than or less than, like we did with size.

The value of this parameter specifies how many days ago you'd like to search.

To find files that have a modification time of a day ago, type:

find / -mtime 1
If we want files that were accessed less than a day ago, we can type:

find / -atime -1
To get files that last had their meta information changed more than 3 days ago, type:

find / -ctime +3
There are also some companion parameters we can use to specify minutes instead of days:

find / -mmin -1

This will give the files that have been modified on the system in the last minute.

Find can also do comparisons against a reference file and return those that are newer:

find / -newer myfile
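The minute-granularity options can be demonstrated with a freshly created file:

```shell
#!/bin/sh
tmp=$(mktemp -d)
touch "$tmp/fresh"               # modified just now
find "$tmp" -type f -mmin -1     # finds it: modified under a minute ago
find "$tmp" -type f -mmin +5     # finds nothing
rm -rf "$tmp"
```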

Finding by Owner and Permissions
You can also search for files by the file owner or group owner.

You do this by using the "-user" and "-group" parameters respectively. Find a file that is owned by the "syslog" user by entering:

find / -user syslog

Similarly, we can specify files owned by the "shadow" group by typing:
find / -group shadow
We can also search for files with specific permissions.

If we want to match an exact set of permissions, we use this form:
find / -perm 644

This will match files with exactly the permissions specified.

If we want to specify anything with at least those permissions, you can use this form:
find / -perm -644

This will match any files that have additional permissions. A file with permissions of "744" would be matched in this instance.
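The exact-match versus at-least-match distinction is easy to see with two test files:

```shell
#!/bin/sh
tmp=$(mktemp -d)
touch "$tmp/plain" "$tmp/exec"
chmod 644 "$tmp/plain"
chmod 744 "$tmp/exec"            # owner also has execute
find "$tmp" -type f -perm 644    # exact match: only "plain"
find "$tmp" -type f -perm -644   # at least 644: both files
rm -rf "$tmp"
```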

Filtering by Depth
For this section, we will create a directory structure in a temporary directory. It will contain three levels of directories, with ten directories at the first level. Each directory (including the temp directory) will contain ten files and ten subdirectories.

Make this structure by issuing the following commands:

cd
mkdir -p ~/test/level1dir{1..10}/level2dir{1..10}/level3dir{1..10}
touch ~/test/{file{1..10},level1dir{1..10}/{file{1..10},level2dir{1..10}/{file{1..10},level3dir{1..10}/file{1..10}}}}
cd ~/test
Feel free to check out the directory structures with ls and cd to get a handle on how things are organized. When you are finished, return to the test directory:

cd ~/test
We will work on how to return specific files from this structure. Let's try an example with just a regular name search first, for comparison:

find -name file1
./level1dir7/level2dir8/level3dir9/file1
./level1dir7/level2dir8/level3dir3/file1
./level1dir7/level2dir8/level3dir4/file1
./level1dir7/level2dir8/level3dir1/file1
./level1dir7/level2dir8/level3dir8/file1
./level1dir7/level2dir8/level3dir7/file1
./level1dir7/level2dir8/level3dir2/file1
./level1dir7/level2dir8/level3dir6/file1
./level1dir7/level2dir8/level3dir5/file1
./level1dir7/level2dir8/file1
. . .
There are a lot of results. If we pipe the output into a counter, we can see that there are 1111 total results:

find -name file1 | wc -l
1111
This is probably too many results to be useful to you in most circumstances. Let's try to narrow it down.

You can specify the maximum depth of the search under the top-level search directory:

find -maxdepth num -name query

To find "file1" only in the "level1" directories and above, you can specify a max depth of 2 (1 for the top-level directory, and 1 for the level1 directories):

find -maxdepth 2 -name file1
./level1dir7/file1
./level1dir1/file1
./level1dir3/file1
./level1dir8/file1
./level1dir6/file1
./file1
./level1dir2/file1
./level1dir9/file1
./level1dir4/file1
./level1dir5/file1
./level1dir10/file1
That is a much more manageable list.

You can also specify a minimum directory if you know that all of the files exist past a certain point under the current directory:

find -mindepth num -name query
We can use this to find only the files at the end of the directory branches:

find -mindepth 4 -name file1
./level1dir7/level2dir8/level3dir9/file1
./level1dir7/level2dir8/level3dir3/file1
./level1dir7/level2dir8/level3dir4/file1
./level1dir7/level2dir8/level3dir1/file1
./level1dir7/level2dir8/level3dir8/file1
./level1dir7/level2dir8/level3dir7/file1
./level1dir7/level2dir8/level3dir2/file1
. . .
Again, because of our branching directory structure, this will return a large number of results (1000).

You can combine the min and max depth parameters to focus in on a narrow range:

find -mindepth 2 -maxdepth 3 -name file1
./level1dir7/level2dir8/file1
./level1dir7/level2dir5/file1
./level1dir7/level2dir7/file1
./level1dir7/level2dir2/file1
./level1dir7/level2dir10/file1
./level1dir7/level2dir6/file1
./level1dir7/level2dir3/file1
./level1dir7/level2dir4/file1
./level1dir7/file1
. . .
Executing and Combining Find Commands
You can execute an arbitrary helper command on everything that find matches by using the "-exec" parameter. This is called like this:

find find_parameters -exec command_and_params {} \;
The "{}" is used as a placeholder for the files that find matches. The "\;" is used so that find knows where the command ends.

For instance, we could find the files in the previous section that had "644" permissions and modify them to have "664" permissions:

cd ~/test
find . -type f -perm 644 -exec chmod 664 {} \;
We could then change the directory permissions like this:

find . -type d -perm 755 -exec chmod 700 {} \;
If you want to chain different conditions together, you can use the "-and" or "-or" operators. The "-and" is assumed if omitted.

find . -name file1 -or -name file9 
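Combining -or with -exec, here is the chmod idea from above run against a couple of scratch files:

```shell
#!/bin/sh
tmp=$(mktemp -d)
touch "$tmp/file1" "$tmp/file5" "$tmp/file9"
# Lock down file1 and file9 only; file5 falls outside the -or group.
find "$tmp" \( -name file1 -o -name file9 \) -exec chmod 600 {} \;
rm -rf "$tmp"
```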


Find Files Using Locate:
An alternative to using find is the locate command. This command is often quicker and can search the entire file system with ease.

You can install the command with apt-get:

sudo apt-get update
sudo apt-get install mlocate

The reason locate is faster than find is because it relies on a database of the files on the filesystem.

The database is usually updated once a day with a cron script, but you can update it manually by typing:

sudo updatedb
Run this command now. Remember, the database must always be up-to-date if you want to find recently acquired or created files.

To find files with locate, simply use this syntax:
locate query

You can filter the output in some ways.
For instance, to only return files containing the query itself, instead of returning every file that has the query in the directories leading to it, you can use the "-b" for only searching the "basename":

locate -b query

To have locate only return results that still exist in the filesystem (that were not removed between the last "updatedb" call and the current "locate" call), use the "-e" flag:

locate -e query

To see statistics about the information that locate has cataloged, use the "-S" option:
locate -S

Database /var/lib/mlocate/mlocate.db:
    3,315 directories
    37,228 files
    1,504,439 bytes in file names
    594,851 bytes used to store database

Sunday, April 12, 2015

Perm Space in Java

It stands for permanent generation.

The permanent generation is special because it holds meta-data describing user classes (classes that are not part of the Java language). Examples of such meta-data are objects describing classes and methods, and they are stored in the permanent generation. Applications with a large code-base can quickly fill up this segment of the heap, which will cause java.lang.OutOfMemoryError: PermGen no matter how high your -Xmx is and how much memory you have on the machine.

The permanent Generation contains the following class information:
  • Methods of a class.
  • Names of the classes.
  • Constants pool information.
  • Object arrays and type arrays associated with a class.
  • Internal objects used by JVM.
  • Information used for optimization by the compilers.
'Java.Lang.OutOfMemoryError: PermGen Space' occurs when the JVM needs to load the definition of a new class and there is not enough space left in PermGen. The default PermGen space allocated is 64 MB in server mode and 32 MB in client mode. There are two common reasons why a PermGen space issue occurs.
The first is that your application or your server has too many classes and the existing PermGen space cannot accommodate all of them. The second is a memory leak: class definitions that were loaded can stay referenced even after they become unused, so they are never unloaded.

Friday, April 10, 2015

Heap Space in Java

When a Java program starts, the Java Virtual Machine gets some memory from the operating system. The Java Virtual Machine, or JVM, uses this memory for all its needs, and part of this memory is called the Java heap memory. The heap in Java is generally located at the bottom of the address space and grows upwards. Whenever we create an object using the new operator or by any other means, the object is allocated memory from the heap, and when the object dies or is garbage collected, the memory goes back to the heap space in Java.

1. Java Heap Memory is part of memory allocated to JVM by Operating System.


2. Whenever we create objects they are created inside Heap in Java.

3. Java heap space is divided into three regions or generations for the sake of garbage collection, called the New Generation, the Old or Tenured Generation, and the Perm Space. The permanent generation is garbage collected during a full GC in the HotSpot JVM.

4. You can increase or change the size of the Java heap space by using the JVM command line options -Xms, -Xmx and -Xmn. Don't forget to add the letter "m" or "g" after the size to indicate megabytes or gigabytes. For example, you can set the maximum heap size to 256MB by executing the command java -Xmx256m HelloWorld.

5. You can use the command "jmap" to take a heap dump in Java and "jhat" to analyze that heap dump.

6. Java Heap space is different than Stack which is used to store call hierarchy and local variables.

7. The Java garbage collector is responsible for reclaiming memory from dead objects and returning it to the Java heap space.

8. Don't panic when you get java.lang.OutOfMemoryError; sometimes it's just a matter of increasing the heap size, but if it's recurrent then look for a memory leak in Java.

9. Use Profiler and Heap dump Analyzer tool to understand Java Heap space and how much memory is allocated to each object.