
Monday, March 12, 2018

Difference between compiled and interpreted language

The difference is not in the language; it is in the implementation.


In a compiled implementation, the original program is translated into native machine instructions, which are executed directly by the hardware.


In an interpreted implementation, the original program is translated into something else. Another program, called "the interpreter", then examines "something else" and performs whatever actions are called for. Depending on the language and its implementation, there are a variety of forms of "something else". From more popular to less popular, "something else" might be


  • Binary instructions for a virtual machine, often called bytecode, as is done in Lua, Python, Ruby, Smalltalk, and many other systems (the approach was popularized in the 1970s by the UCSD P-system and UCSD Pascal)

  • A tree-like representation of the original program, such as an abstract-syntax tree, as is done for many prototype or educational interpreters

  • A tokenized representation of the source program, similar to Tcl

  • The characters of the source program, as was done in MINT and TRAC


One thing that complicates the issue is that it is possible to translate (compile) bytecode into native machine instructions. Thus, a successful interpreted implementation might eventually acquire a compiler. If the compiler runs dynamically, behind the scenes, it is often called a just-in-time compiler or JIT compiler. JITs have been developed for Java, JavaScript, Lua, and I daresay many other languages. At that point you can have a hybrid implementation in which some code is interpreted and some code is compiled.
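As a concrete sketch of the two paths (hello.c and hello.py are hypothetical files, and the exact commands depend on your toolchain), the shell session below contrasts ahead-of-time compilation with bytecode interpretation:

# compiled implementation: translate the source to native machine code once,
# then the hardware executes the binary directly
cc -o hello hello.c
./hello

# interpreted implementation: the python binary compiles the source to
# bytecode behind the scenes and its virtual machine executes that bytecode
python3 hello.py

# peek at the "something else" (the bytecode) that CPython generates
python3 -m dis hello.py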




Advantages of compiled languages

  • Speed
  • Native binaries are harder to inspect and tamper with
  • Large software systems are typically written in compiled languages
  • Reflection is not impossible, though harder than in interpreted languages
  • Interoperability is possible with .NET, Java and Python

Monday, May 1, 2017

Tips for Writing Your Research Proposal


1. Know yourself: Know your area of expertise, your strengths and your weaknesses. Play to your strengths, not to your weaknesses. If you want to get into a new area of research, learn something about the area before you write a proposal. Research previous work. Be a scholar.

2. Know the program from which you seek support: You are responsible for finding the appropriate program for support of your research.

3. Read the program announcement: Programs and special activities have specific goals and specific requirements. If you don’t meet those goals and requirements, you have thrown away your chance of success. Read the announcement for what it says, not for what you want it to say. If your research does not fit easily within the scope of the topic areas outlined, your chance of success is nil.

4. Formulate an appropriate research objective: A research proposal is a proposal to conduct research, not to conduct development or design or some other activity. Research is a methodical process of building upon previous knowledge to derive or discover new knowledge, that is, something that wasn’t known before the research was conducted.

5. Develop a viable research plan: A viable research plan is a plan to accomplish your research objective that has a non-zero probability of success. The focus of the plan must be to accomplish the research objective.

6. State your research objective clearly in your proposal: A good research proposal includes a clear statement of the research objective. Early in the proposal is better than later in the proposal. The first sentence of the proposal is a good place. A good first sentence might be, “The research objective of this proposal is...” Do not use the word “develop” in the statement of your research objective.

7. Frame your project around the work of others: Remember that research builds on the extant knowledge base, that is, upon the work of others. Be sure to frame your project appropriately, acknowledging the current limits of knowledge and making clear your contribution to the extension of these limits. Be sure that you include references to the extant work of others.

8. Grammar and spelling count: Proposals are not graded on grammar. But if the grammar is not perfect, the result is ambiguities left to the reviewer to resolve. Ambiguities make the proposal difficult to read and often impossible to understand, and often result in low ratings. Be sure your grammar is perfect.

9. Format and brevity are important: Do not feel that your proposal is rated based on its weight. Use 12-point, easily legible fonts and generous margins. Take pity on the reviewers. Make your proposal a pleasant reading experience that puts important concepts up front and makes them clear. Use figures appropriately to make and clarify points, but not as filler.

10. Know the review process: Know how your proposal will be reviewed before you write it. Proposals that are reviewed by panels must be written to a broader audience than proposals that will be reviewed by mail. Mail review can seek out reviewers with very specific expertise in very narrow disciplines.

11. Proofread your proposal before it is sent: Many proposals are sent out with idiotic mistakes, omissions, and errors of all sorts. Proposals have been submitted with the list of references omitted and with the references not referred to. Proposals have been submitted to the wrong program. Proposals have been submitted with misspellings in the title. These proposals were not successful. Stupid things like this kill a proposal. It is easy to catch them with a simple, but careful, proofreading. Don’t spend six or eight weeks writing a proposal just to kill it with stupid mistakes that are easily prevented.

12. Submit your proposal on time: Duh? Why work for two months on a proposal just to have it disqualified for being late? Remember, fairness dictates that proposal submission rules must apply to everyone. It is not up to the discretion of the program officer to grant you dispensation on deadlines. Get your proposal in two or three days before the deadline.



Thursday, April 20, 2017

DML Transaction Info in Oracle

For each DML operation, a transaction ID is initiated. We can get the transaction ID with the query below:


select dbms_transaction.local_transaction_id from dual;




PAUL @ pauldb-uat > select dbms_transaction.local_transaction_id from dual;

LOCAL_TRANSACTION_ID
--------------------------------------------------------------------------------
8.10.4224


The transaction ID is a series of numbers denoting the undo segment number, slot# and record# (also known as sequence#) respectively, separated by periods.


To get the details about a transaction, we can use the query below:

select
    do.owner            object_owner,
    do.object_name      object_name,
    lo.session_id       oracle_sid,
    lo.oracle_username  db_user,
    decode(lo.locked_mode,
        0, 'None',
        1, 'Null',
        2, 'Row Share',
        3, 'Row Exclusive',
        4, 'Share',
        5, 'Share Row Exclusive',
        6, 'Exclusive',
        lo.locked_mode
    )                   locked_mode
from
    v$locked_object lo,
    dba_objects do
where
    (lo.xidusn || '.' || lo.xidslot || '.' || lo.xidsqn) = '&transid'
and
    do.object_id = lo.object_id;
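As a related sketch, the three numbers that make up the transaction ID can also be seen in v$transaction; joining it to v$session shows who owns each active transaction (assuming you have SELECT privileges on these views):

select t.xidusn || '.' || t.xidslot || '.' || t.xidsqn  transaction_id,
       s.sid, s.serial#, s.username, t.start_time
from v$transaction t, v$session s
where s.taddr = t.addr;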

Wednesday, April 19, 2017

Clearing ITL Slots in Oracle

For a DML operation, the transaction keeps an entry in each block containing the target rows. These entries are maintained in the ITL (Interested Transaction List). Now our interest is in the question: when will these ITL entries be cleared?

To answer that question, consider this scenario: a transaction updates 10000 records, on 10000 different blocks. Naturally there will be 10000 ITL slots, one on each block, all pointing to the same transaction ID. The transaction commits and the locks are released. Should Oracle revisit each block and remove the ITL entry corresponding to the transaction as part of the commit operation?

If that were the processing logic, the commit would take a very long time. Acquiring the buffers of the 10000 blocks and updating the ITL entries would not be quick; it would prolong the commit processing. The target of the Oracle design is that commit processing is actually very quick: a flush of the log buffer to the redo logs and the writing of the commit marker in the redo stream. Even a checkpoint to the datafiles is not done as part of commit processing – all the effort goes towards making the process fast, very fast. Had Oracle added the logic of altering ITL slots, the commit processing would have been potentially long, very long. Therefore Oracle does not remove the ITL entries after the transaction ends (by committing, or rolling back); the slots are just left behind as artifacts.

So, when does an ITL entry get cleared? When the block's buffer is written to disk, the unneeded ITL entries are checked and cleared out.
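To see this for yourself, the ITL of a block can be inspected in a block dump. A hedged sketch follows; the table name t is a placeholder, and the file and block numbers in the dump command must be taken from the first query's output:

-- locate the file# and block# of a row in a hypothetical table t
select dbms_rowid.rowid_relative_fno(rowid)  file#,
       dbms_rowid.rowid_block_number(rowid)  block#
from t where rownum = 1;

-- dump that block to a trace file; its "Itl" section lists the slots
alter system dump datafile 4 block 128;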

How ITL Slots are Maintained in a Block

When a transaction modifies rows, it locks them (until it commits) by placing a special type of entry in the block header known as an Interested Transaction List (ITL) entry. The ITL entry records the transaction ID and other information.

Now assume there are 5 records in the block and a transaction updated (and therefore locked) all five of them. How many ITL entries will be used – one or five?

Five ITL slots may seem feasible; but what if the block has 10,000 records? Is it possible to have that many ITL slots in the block header? Let's ponder that for a second. There would be two big issues with that many ITL slots.

First, each ITL slot is 24 bytes long. So 10,000 slots would take up 240,000 bytes, roughly 234 KB. A typical Oracle block is 8 KB (it could be 2K, 4K or 16K; but suppose it is the default 8K). Of course it can't accommodate 234 KB.

Second, even if the total size of the ITL slots were less than the size of the block, where would the room to hold data be? In addition, there must be some space for the data block overhead; where would that space come from?

Obviously, these are genuine problems that make one ITL slot per row impractical. Therefore Oracle does not create an ITL entry for each locked row. Instead, it creates an ITL entry for each transaction, which may have updated a number of rows. Let me repeat that – each ITL slot in the block header actually refers to a transaction, not the individual rows. That is the reason why you will not find the rowid of the locked rows in the ITL slot.

There is a reference to a transaction ID, but not a rowid. When a transaction wants to update a row in the block, it checks the ITL entries. If there are none, it means rows in that block are unlocked. However, if there are some ITL entries, does it mean that some rows in the block are locked? Not necessarily. It simply means that rows in the block were locked earlier; but that lock may or may not be active now. To check whether a row is locked, the transaction checks the lock byte stored along with the row.

So if the presence of an ITL slot does not mean a record in the block is locked, when does the ITL slot get cleared so that it can be reused, or when does that ITL slot disappear? Commit and rollback do not clear the ITL slot by themselves; as described above, the slot is cleaned out later, when the block's buffer is written to disk.

Tuesday, April 18, 2017

ITL - Interested Transaction List

Oracle keeps note of which rows are locked by which transaction in an area at the top of each data block known as the 'interested transaction list'. The number of ITL slots in any block in an object is controlled by the INITRANS and MAXTRANS attributes. INITRANS is the number of slots initially created in a block when it is first used, while MAXTRANS places an upper bound on the number of entries allowed. Each transaction which wants to modify a block requires a slot in this 'ITL' list in the block.

If multiple transactions attempt to modify the same block, they can block each other if the following conditions are fulfilled:

- There is no free ITL ("Interested Transaction List") slot available. Oracle records the lock information right in the block, and each transaction allocates an ITL entry.

- There is insufficient space left in the block to add a new ITL slot. Since each ITL entry requires space (24 bytes per slot), a new one cannot be created if the block doesn't have sufficient free space.

The INITRANS and MAXTRANS settings of a segment control the initial and maximum number of ITL slots per block. The default INITRANS in recent Oracle releases is 1 for tables and 2 for indexes, and the default value of MAXTRANS is 255 since the 10g release.

The problem shows up when a block is almost full and several transactions attempt to manipulate different rows that all reside in this block: sessions start waiting on the "enq: TX - allocate ITL entry" event.

The ITL in "enq: TX - allocate ITL entry" stands for "Interested Transaction List", and there are several approaches to fixing this error:
1 - Increase the value of INITRANS and/or MAXTRANS for the table and indexes (a sketch follows this list).
2 - Move the table to a smaller blocksize.
3 - In some cases, you can remove the "enq: TX - allocate ITL entry" error for UPDATE/DELETE DML by reorganizing the table to increase PCTFREE, thereby leaving fewer rows per data block.
4 - Reduce the degree of parallel DML on this table.
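For approach 1, a minimal sketch (my_table and my_table_pk are placeholder names). Note that a higher INITRANS only applies to newly formatted blocks, so existing blocks must be reorganized to pick it up:

alter table my_table initrans 10;
-- rebuild the segment so existing blocks are reformatted with the new value
alter table my_table move;
-- indexes become unusable after a move and must be rebuilt anyway
alter index my_table_pk rebuild initrans 10;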

Thursday, November 17, 2016

Shell Script for Deleting Files Older than Some Days

find is the common tool for this kind of task:

find ./target_dir ! -name 'stayhere.txt' -mtime +5 -type f -delete


EXPLANATIONS

  • ./target_dir: your directory (replace with your own)
  • ! -name: file to exclude
  • -mtime +5: older than 5 days
  • -type f: only files
  • -delete: no surprise. Remove it to test your find filter before executing the whole command

And take care that ./target_dir exists to avoid bad surprises!
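To run the cleanup automatically, an entry like the one below could be added with crontab -e (the path is a placeholder); this sketch runs it every night at 02:30:

30 2 * * * find /path/to/target_dir ! -name 'stayhere.txt' -mtime +5 -type f -delete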

Thursday, May 26, 2016

Terminator Terminal Shortcuts in Ubuntu


It's more convenient to use shortcuts than to move the cursor to operate the terminal. Here are the shortcuts for operating the Terminator terminal in Ubuntu:
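With Terminator's stock keybindings (defaults can vary by version, so treat these as a starting point):

  • Ctrl+Shift+E: split the terminal vertically
  • Ctrl+Shift+O: split the terminal horizontally
  • Ctrl+Shift+T: open a new tab
  • Ctrl+Shift+W: close the current terminal
  • Ctrl+Shift+X: toggle maximizing the current terminal
  • Ctrl+Shift+N / Ctrl+Shift+P: move to the next / previous terminal
  • Ctrl+Shift+Q: close the whole window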


Thursday, May 5, 2016

Executing a Local Shell Script on a Remote Server

A system admin needs to get different information from different servers for various purposes at different times. When collecting information from many different servers, it is best practice to make a script with the required queries, execute it on those servers, and collect the output. That output may then need to be transferred to the local PC for further analysis.

Doing this by hand is very time consuming across many servers, as is gathering the information on the local PC. To get rid of the fatigue, we can perform the task with a little trick.

Step 1: We write down the script with all the commands to execute on those servers.
Step 2: We execute the script on the remote servers from the local machine and redirect the output to an output file.

Here we focus on the process of executing the local script on remote servers.

Suppose we have a script [dbhostcheck.sh] as below for checking the system information:

#! /bin/bash

echo "Host Name:"|hostname

echo
echo "||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||"
echo "Disk Checking"
echo "||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||"
df

echo
echo "||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||"
echo "CPU Utilization"
echo "||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||"
sar 1 10

echo
echo "||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||"
echo "Memory Utilization"
echo "||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||"

vmstat 1 10

Now we want to execute this script on a remote server. For this purpose we can execute it with the command below:

ssh user_name@name_of_host "/usr/local/bin/bash -s" -- <./dbhostcheck.sh >output.txt

  • ssh user_name@name_of_host: remotely access the host
  • "/usr/local/bin/bash -s": run bash on the remote host; -s makes it read the script from standard input
  • -- <./dbhostcheck.sh: feed the local script to that remote bash
  • >output.txt: redirect the output to a local file named output.txt
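Since the motivation is many servers, the same idea extends naturally to a loop. A minimal sketch, where the host names db1, db2, db3 are placeholders and key-based ssh login is assumed to be set up:

# run the local script on several hosts, one output file per host
for host in db1 db2 db3; do
    ssh "user_name@$host" "/usr/local/bin/bash -s" -- < ./dbhostcheck.sh > "output_$host.txt"
done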






Finding currently running jobs and job run history

SELECT job_name, session_id, running_instance, elapsed_time, cpu_used
FROM dba_scheduler_running_jobs;

Also one can use the following view to find the history details of jobs that have run:

SELECT job_name, log_date, status, actual_start_date, run_duration, cpu_used
FROM dba_scheduler_job_run_details;

To find the jobs that haven't succeeded:

SELECT job_name, log_date, status, actual_start_date, run_duration, cpu_used
FROM dba_scheduler_job_run_details
WHERE status <> 'SUCCEEDED';

Wednesday, May 4, 2016

ORA-01720: grant option does not exist for a table

Users granted a specific role need select privileges on tables and views owned by the schema owners.


However, while trying to grant select privileges on some views, we come across a somewhat peculiar error:



paul @ agentdb-live > grant select on AGENTBIP.VIEW_CUST_AGENT_DIST_INFO to ROLE_ITDD_OT;
grant select on AGENTBIP.VIEW_CUST_AGENT_DIST_INFO to ROLE_ITDD_OT
                         *
ERROR at line 1:
ORA-01720: grant option does not exist for 'MMUSER_GW.DBBL_AGENT_DATA'


The reason for it is that the view VIEW_CUST_AGENT_DIST_INFO, owned by AGENTBIP, is built on top of the table DBBL_AGENT_DATA owned by someone else, i.e. MMUSER_GW.

AGENTBIP cannot grant privileges on this kind of view to someone else as long as it does not have the privileges WITH GRANT OPTION on the underlying tables.

The solution:

paul @ agentdb-live > Grant select on MMUSER_GW.DBBL_AGENT_DATA to AGENTBIP with grant option;

Grant succeeded.

SYSTEM @ agentdb-live > grant select on AGENTBIP.VIEW_CUST_AGENT_DIST_INFO to ROLE_ITDD_OT;

Grant succeeded.

Tuesday, April 26, 2016

Find and kill a process in one line using bash

For operations and other purposes, we need to kill processes on a Linux system. To do this, we first find the process ID and then kill the process with that ID, i.e.:

[agentrpt@localhost ~]$ ps -ef | grep 'jar'
agentrpt 13131     1  0 16:49 ?        00:00:00 java -jar ABS_ALERT_MIDDLEWARE.jar
agentrpt 13278     1  0 16:54 ?        00:00:00 java -jar SMSMailClient.jar
agentrpt 13402 13367  0 16:58 pts/0    00:00:00 grep jar

To kill the process of java -jar SMSMailClient.jar, we need to execute the command:
kill 13278

We can do it in a simpler way. For regular use, we can create a script with the command below.

In bash, you should be able to do:

kill $(ps aux | grep '[SMS]MailClient.jar' | awk '{print $2}')
Details on its workings are as follows:

The ps gives you the list of all the processes.
The grep filters that based on your search string, [SMS] is a trick to stop you picking up the actual grep process itself.
The awk just gives you the second field of each line, which is the PID.
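Where the procps tools are available, pkill can do the same job without the grep trick, since -f matches against the full command line:

pkill -f 'SMSMailClient.jar'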

Thursday, April 21, 2016

Remove all files/directories except for one file

find . ! -name 'filetoexist.txt' -type f -exec rm -f {} +

will remove all files except filetoexist.txt. To remove directories, change -type f to -type d and add -r option to rm.

To exclude a list of files: ! \( -name one_file -o -name two_file \)

In bash, to use rm !(file.txt), we will have to enable extglob:

$ shopt -s extglob
$ rm !(file.txt)

Note that extglob only works in bash and the Korn shell family. And using rm !(file.txt) can cause an "Argument list too long" error.

In zsh, you can use ^ to negate pattern with extendedglob enabled:

$ setopt extendedglob
$ rm ^file.txt

or use the same !() syntax as in ksh and bash, in zsh with the options ksh_glob and no_bare_glob_qual enabled.

Wednesday, April 20, 2016

To Get the IP Address of a Sun Server

For a normal user (i.e., not 'root'), ifconfig isn't in the path, but it is still the command to use.
More specifically: /usr/sbin/ifconfig -a
If we want just the IP address, we need some scripting:
/usr/sbin/ifconfig -a | awk 'BEGIN { count=0; } { if ( $1 ~ /inet/ ) { count++; if( count==2 ) { print $2; } } }'
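An alternative sketch that filters the loopback interface by its address rather than by its position, in case the interface order differs on your machine:

/usr/sbin/ifconfig -a | awk '$1 == "inet" && $2 != "127.0.0.1" { print $2 }'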

Monday, April 18, 2016

Modifying a File Using Vim and awk

In some cases, we have to change a big file containing the same type of text throughout. We can use the vim editor for this type of task. Suppose we have a file as below:


Now, for a particular need, we want to generate a delete script from this file's contents, as below:


For doing so,

1. We opened the file using the vim editor: vim loglistws.txt
2. Then we used the commands below to add double quotes around the words:
   :%s/EJB_Str/"EJB_Str/g

Then save the file using :w
   :%s/\.log/\.log"/g

3. Now save the file using :wq!
4. Now we made a new file from the existing file as:
    cat loglistws.txt | awk '{print "rm ",$9,$10,$11,$12}' > myscriptws.sh
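The same result can be produced non-interactively. A hedged sketch, assuming (as the vim substitutions above imply) that the log names start with EJB_Str and end in .log, and that fields 9-12 hold the file name:

sed 's/EJB_Str/"EJB_Str/g; s/\.log/.log"/g' loglistws.txt |
awk '{print "rm ", $9, $10, $11, $12}' > myscriptws.sh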

Sunday, April 17, 2016

Comparing Two Packages

Many times we need to compare different packages, wars, or jars to find the specific differences. We can use different tools for this purpose, e.g. pkgdiff, japi-compliance-checker, or clirr. Here the process of finding differences is shown using pkgdiff.

1. Download the zipped file from: https://github.com/lvc/pkgdiff/archive/1.7.2.tar.gz
2. Untar the file using the command: tar xvfz somefilename.tar.gz
3. Open the unzipped folder; the make script is available in this folder.
4. Install it using the command: sudo make install prefix=/usr
5. Now compare the files using: pkgdiff filename.jar filename_new.jar
6. The above command will print the result.
7. A report will also be generated in HTML; we can get the detailed report from that file. The consolidated commands are shown below.
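Put together, the whole flow looks like this (the jar names are placeholders):

wget https://github.com/lvc/pkgdiff/archive/1.7.2.tar.gz
tar xvfz 1.7.2.tar.gz
cd pkgdiff-1.7.2
sudo make install prefix=/usr
pkgdiff filename.jar filename_new.jar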

Wednesday, March 30, 2016

Archive Redo Log

An Oracle database can run in one of two modes. By default, the database is created in NOARCHIVELOG mode. Oracle Database lets you save filled groups of redo log files to one or more offline destinations using its ARCn (archiver) process; the copies are known collectively as the archived redo log, or more simply the archive log. The process of turning redo log files into archived redo log files is called archiving.

When in NOARCHIVELOG mode the database runs normally, but there is no capacity to perform any type of point-in-time recovery operations or online backups. Thus, you have to shut down the database to back it up, and when you recover the database you can only recover it to the point of the last backup. While this might be fine for a development environment, the big corporate types tend to frown when a week's worth of current production accounting data is lost forever. We can check the archiving mode of a database using the following queries:

SQL> archive log list;
SQL> select log_mode from v$database;
We can also find the archiver process with the command below:
$ ps -ef|grep -i _arc
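If the database turns out to be in NOARCHIVELOG mode, switching it over requires a short outage. A minimal sketch of the usual sequence, run as SYSDBA:

SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;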

Wednesday, August 5, 2015

Parameter Files in Oracle

A parameter is a key-value pair,
i.e. db_block_size and db_name each hold a value in a file.

Types of parameters:

tnsnames.ora parameter file: contains parameters related to the network. This file maps net service names used to connect over the network.

listener.ora parameter file: tells on which port connections are accepted for accessing the DB.

Database parameters: known as init.ora. It consists of the parameters required to start the database. Normally the init.ora file resides under a name concatenated with the Site Identifier (SID): if the database SID is XE, then the init file name is initXE.ora. If we check this file we will find parameters which suggest how the database will be configured [it's like the blueprint of a home]. It contains some default parameters, but the basic parameters that must be in this file are CONTROL_FILES, DB_BLOCK_SIZE, and DB_NAME. As this file is needed for the start of the database, we can use it as:

startup pfile=initXE.ora


We can change the name of the init.ora file for different types of requirements. Sometimes this causes confusion when multiple copies of the same file exist. Another problem is that, since this file is editable using a text editor, an unwanted mistake can leave the file without the correct values, and with a corrupted file the DB won't start up properly. To get rid of this type of problem, starting from Oracle 9i, Oracle introduced another file called the SP file [server parameter file]. Only one copy resides, and it is a binary file: you are not able to change it manually; only Oracle can modify it, and to modify it we execute a command. All the parameters are exposed in a dynamic performance view named v$parameter. To get the value of a parameter, after connecting to the DB through sqlplus, we can execute


desc v$parameter;

In this view we will find more than 250 parameters.

select value from v$parameter where name = 'sql_trace';
-- or
show parameter sql_trace


We can modify the value of a parameter, and we control where the change takes effect with the scope clause:

alter system set sql_trace=true scope=memory;

Using the above statement, we change the parameter value only for as long as the instance is running.

scope=memory: effective until the instance is shut down
scope=spfile: preserves the value in the SP file; it takes effect after restarting the database
scope=both: applies the change immediately and preserves it in the SP file


There are two ways to get back after corruption of the SP file. First, by using the Unix command strings on the SP file, we can extract all the key-value pairs that can be stored in an init.ora file. We can start the database with it, and after that we can re-create the SP file from it.

Second, whenever the database starts, all the non-default parameters are written into a file named alert.log. From that file we can check those values and create an (anyname).ora file; after starting with this file we can create the SP file from the init file.
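The re-creation steps described above map to two statements (the paths are placeholders):

CREATE SPFILE FROM PFILE='/tmp/initXE.ora';
-- the reverse direction is handy for keeping a readable backup:
CREATE PFILE='/tmp/initXE_backup.ora' FROM SPFILE;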

Tuesday, August 4, 2015

File system in Oracle Database

Different types of database files.

1. Parameter files: tell how the instance is configured, e.g. how big the SGA is. At database startup, the instance uses the parameter file for its configuration. It also tells how many database writer (DBWn) processes to run.

2. Data files: store all of the user data, such as tables and table data. A data file is an operating system file. Each database must contain at least one data file.

3. Redo log files: the transaction log of the database. When any operation is performed in the database, a record of the transaction is written to the redo log files. Using redo log files we can recover the database from instance or media failure; they contain all the information needed to replay the changes made to the database.

4. Control files: tell the instance where the data files and redo log files exist. The instance reads the control file to locate those files. Because of the importance of its role, multiple copies of the control file are kept; this is called multiplexing of control files.

5. Temp files: for the temporary requirements of some operations, the instance needs temporary storage, and temp files provide that space; e.g. it might be needed for ORDER BY query processing.


6. Password files: used to authenticate users performing administrative tasks [starting up / shutting down the database] over the network.

7. Trace files / alert log: there are many background processes running behind the database. To get the status of those processes for various purposes, we can take the help of trace files; for an unexpected error or failure, the trace files contain the status of those services.
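Most of these files can be located from the dynamic performance views; a quick reference (assuming SELECT privileges on the v$ views):

select name   from v$datafile;    -- data files
select member from v$logfile;     -- redo log files
select name   from v$controlfile; -- control files
select name   from v$tempfile;    -- temp files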

Tuesday, June 16, 2015

Concurrency & Parallelism

In the context of multithreaded programs, we often use the terms concurrency and parallelism.

Concurrency refers to multiple tasks being in progress at the same time. An application may process one task at a time (sequentially) or work on multiple tasks at the same time (concurrently).

Parallelism refers to each task being broken into subtasks which can be processed in parallel. It is related to how an application handles each individual task: an application may process the task serially from start to end, or split the task up into subtasks which can be completed in parallel.

  • An application can be concurrent, but not parallel. This means that it processes more than one task at the same time, but the tasks are not broken down into subtasks.
  • An application can also be parallel but not concurrent. This means that the application only works on one task at a time, and this task is broken down into subtasks which can be processed in parallel.
  • Additionally, an application can be neither concurrent nor parallel. This means that it works on only one task at a time, and the task is never broken down into subtasks for parallel execution.
  • Finally, an application can also be both concurrent and parallel, in that it both works on multiple tasks at the same time, and also breaks each task down into subtasks for parallel execution. However, some of the benefits of concurrency and parallelism may be lost in this scenario, as the CPUs in the computer are already kept reasonably busy with either concurrency or parallelism alone. Combining them may lead to only a small performance gain or even a performance loss. We should analyze and measure before blindly adopting a combined concurrent and parallel model.
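As a loose shell analogy (task_a, task_b, and process_one are hypothetical commands, and -P requires GNU xargs): running independent jobs side by side is concurrency, while fanning one job's items out over several workers is parallel decomposition:

# concurrent: two independent tasks in progress at the same time
./task_a & ./task_b & wait

# parallel: one task's work items split across 4 workers
xargs -P 4 -n 1 ./process_one < work_items.txt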