Tuesday, July 11, 2017

Oracle Internals on Controlfile



The database control file is a small binary file necessary for the database to start and operate successfully. A control file is updated continually by Oracle during database use, so it must be available for writing whenever the database is open. If for some reason the control file is not accessible, then the database cannot function properly. Each control file is associated with only one Oracle database.

The control file serves the following purposes:
  • It contains information about data files, online redo log files, and other files that are required to open the database.
  • The control file tracks structural changes to the database. For example, when an administrator adds, renames, or drops a data file or online redo log file, the database updates the control file to reflect this change.
  • It contains metadata that must be accessible when the database is not open.
Controlfile Contents
  • The database name
  • The timestamp of database creation
  • The names and locations of associated datafiles and redo log files
  • Tablespace information
  • Datafile offline ranges
  • The log history
  • Archived log information
  • Backup set and backup piece information
  • Backup datafile and redo log information
  • Datafile copy information
  • The current log sequence number
  • Checkpoint information
Controlfile Structure
Circular reuse records
      OFFLINE RANGE
      ARCHIVED LOG
      BACKUP SET
      BACKUP PIECE
      BACKUP DATAFILE
      BACKUP REDOLOG
      DATAFILE COPY
      BACKUP CORRUPTION
      COPY CORRUPTION
      DELETED OBJECT
      PROXY COPY
Noncircular reuse records
      CKPT PROGRESS (Checkpoint progress)
      REDO THREAD, REDO LOG (Logfile)
      DATAFILE (Database File)
      FILENAME (Datafile Name)
      TABLESPACE
      TEMPORARY FILENAME
      RMAN CONFIGURATION
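You can see these record sections, with their record sizes and slot usage, by querying V$CONTROLFILE_RECORD_SECTION. A quick illustrative query (exact output varies by release):
SELECT type, record_size, records_total, records_used
FROM v$controlfile_record_section
ORDER BY type;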
CONTROL_FILES
CONTROL_FILE_RECORD_KEEP_TIME
  • Records are not necessarily reused as soon as the number of days specified by CONTROL_FILE_RECORD_KEEP_TIME has elapsed; the parameter sets the MINIMUM number of days for which the details are retained.
  • This parameter applies only to records in the control file that are circularly reusable (such as archive log records and various backup records). It does not apply to records such as datafile, tablespace, and redo thread records, which are never reused unless the corresponding object is dropped from the tablespace.
Controlfile Creation
  • All control files for the database have been permanently damaged and you do not have a control file backup.
  • You want to change the database name. For example, you would change a database name if it conflicted with another database name in a distributed environment.
  • You want to change the parameters from the CREATE DATABASE or CREATE CONTROLFILE commands: MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY, and MAXINSTANCES, when the compatibility is earlier than 10.2.0. If compatibility is 10.2.0 or later, you do not have to create new control files when you make such a change; the control files automatically expand, if necessary, to accommodate the new configuration information.
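For illustration, a minimal CREATE CONTROLFILE sketch is shown below; the database name, file paths, and limits are all hypothetical and must match your own database:
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "ORCL" NORESETLOGS ARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 292
LOGFILE
    GROUP 1 '/u01/oradata/orcl/redo01.log' SIZE 50M,
    GROUP 2 '/u01/oradata/orcl/redo02.log' SIZE 50M
DATAFILE
    '/u01/oradata/orcl/system01.dbf',
    '/u01/oradata/orcl/sysaux01.dbf',
    '/u01/oradata/orcl/users01.dbf'
CHARACTER SET AL32UTF8;
After creating the control file, open the database (ALTER DATABASE OPEN, or OPEN RESETLOGS if RESETLOGS was used) and check the alert log, as described later.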
Checking for Missing or Extra Files
Handling Errors during CREATE CONTROLFILE
ControlFile Expansion
The size to which the control file can grow depends on the number of:
  • Backups that you perform
  • Archived redo logs that the database generates
  • Number of days that this information is stored in the control file
Expanded controlfile section 11 from 28 to 109 records
Requested to grow by 81 records; added 3 blocks of records
RMAN> CROSSCHECK archivelog all;
RMAN> LIST EXPIRED archivelog all;
RMAN> DELETE EXPIRED archivelog all;
Controlfile Enqueue
  • Redo log Switch
  • Redo log Archival
  • BEGIN / END BACKUP
  • Checkpoint
  • Performing Crash Recovery
  • ORA-00600: internal error code, arguments: [2103]   (10.2.0.3 and Earlier)
  • ORA-00494: enqueue [CF] held for too long (more than 900 seconds) by 'inst , ospid ' (10.2.0.4 and later)
When the waiting process is unable to kill the holding process, an ORA-00239 error is raised.
Common causes for CF Enqueue Timeout are
  • Very slow I/O subsystem where the Control files are stored. 
  • Frequent log switching - redo logs too small or insufficient logs 
  • Using asynchronous I/O together with multiple DB writers; you can't use both of them, so back one of them out.
  • OS / Hardware issues
Guidelines for Controlfiles
Provide Filenames for the Control Files
  • If you are not using Oracle Managed Files, then the database creates a control file and uses a default filename. The default name is operating system specific.
  • If you are using Oracle Managed Files, then the initialization parameters you set to enable that feature determine the name and location of the control files.
  • If you are using Oracle Automatic Storage Management (Oracle ASM), you can place incomplete Oracle ASM filenames in the DB_CREATE_FILE_DEST and DB_RECOVERY_FILE_DEST initialization parameters. Oracle ASM then automatically creates control files in the appropriate places.
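As an illustration, initialization parameter settings along these lines (the disk group names are hypothetical) would let Oracle ASM place and name the control files automatically:
DB_CREATE_FILE_DEST = '+DATA'
DB_RECOVERY_FILE_DEST = '+FRA'
DB_RECOVERY_FILE_DEST_SIZE = 10G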
Multiplex Control Files on Different Disks
  • The database writes to all filenames listed for the initialization parameter CONTROL_FILES in the database initialization parameter file.
  • The database reads only the first file listed in the CONTROL_FILES parameter during database operation.
  • If any of the control files become unavailable during database operation, the instance becomes inoperable and should be aborted.
Back Up Control Files
  • Adding, dropping, or renaming datafiles
  • Adding or dropping a tablespace, or altering the read/write state of the tablespace
  • Adding or dropping redo log files or groups
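Two common ways to back up the control file after such a change (the backup path is illustrative):
ALTER DATABASE BACKUP CONTROLFILE TO '/backup/control01.bkp' REUSE;
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
The first makes a binary copy; the second writes a CREATE CONTROLFILE script to a trace file.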
Dropping Controlfiles
  • Shut down the database.
  • Edit the CONTROL_FILES parameter in the database initialization parameter file to delete the old control file name.
  • Restart the database.
This operation does not physically delete the unwanted control file from the disk. Use operating system commands to delete the unnecessary file after you have dropped the control file from the database.
Controlfile Structure Internals
Controlfile Transactions
Controlfile Resizing
Controlfile Dumps
Dump Level   Dump Contains
1            only the file header
2            just the file header, the database info record, and checkpoint progress records
3            all record types, but just the earliest and latest records for circular reuse record types
4            as above, but includes the 4 most recent records for circular reuse record types
5+           as above, but the number of circular reuse records included doubles with each level
oradebug setmypid
oradebug dump controlf 3
alter session set events 'immediate trace name controlf level 3';


Wait event: refresh controlfile command
SQL> select dest, description from x$messages where description like 'refresh %';

DEST DESCRIPTION
---- ----------------------------------------
CKPT refresh control file
Oracle Control File Read & Write Wait Events
SELECT name FROM v$controlfile;

SELECT * FROM v$system_event
WHERE event LIKE '%control%';

SELECT event, wait_time, p1, p2, p3
FROM v$session_wait WHERE event LIKE '%control%';

ALTER SESSION SET EVENTS 'immediate trace name controlf level <level>';

A control file records information such as the database name and the timestamp of database creation (see the full list under "Controlfile Contents" above). The database name is taken either from the name specified by the DB_NAME initialization parameter or from the name used in the CREATE DATABASE statement.
Each time that a datafile or a redo log file is added to, renamed in, or dropped from the database, the control file is updated to reflect this physical structure change. These changes are recorded so that:
Oracle can identify the datafiles and redo log files to open during database startup
Oracle can identify files that are required or available in case database recovery is necessary
Therefore, if you make a change to the physical structure of your database (using ALTER DATABASE statements), then you should immediately make a backup of your control file.

Information about the database is stored in different sections of the control file. Each section is a set of records about an aspect of the database. For example, one section in the control file tracks data files and contains a set of records, one for each data file. Each section is stored in multiple logical control file blocks. Records can span blocks within a section.
The control file contains the following two types of records:

Circular reuse records
These records contain noncritical information that is eligible to be overwritten if needed. When all available record slots are full, the database either expands the control file to make room for a new record or overwrites the oldest record. Examples include records about archived redo logs and the LOG HISTORY section.

Noncircular reuse records
These records contain critical information that does not change often and cannot be overwritten. Examples include tablespaces, data files, online redo log files, and redo threads; the DATABASE (info) record is another noncircular controlfile section (one that can only expand). Oracle Database never reuses these records unless the corresponding object is dropped from the tablespace.
Reading and writing the control file blocks is different from reading and writing data blocks. For the control file, Oracle Database reads and writes directly from the disk to the program global area (PGA). Each process allocates a certain amount of its PGA memory for control file blocks.

This parameter is specified in the pfile/spfile and gives the location of the controlfiles. CONTROL_FILES specifies one or more control file names, separated by commas. An instance uses this parameter to locate the controlfiles during database startup.
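For example, a pfile entry might look like this (the paths are illustrative):
CONTROL_FILES = ('/u01/oradata/orcl/control01.ctl',
                 '/u02/oradata/orcl/control02.ctl',
                 '/u03/oradata/orcl/control03.ctl')
The current setting can be checked with SHOW PARAMETER control_files, or by querying V$CONTROLFILE.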

CONTROL_FILE_RECORD_KEEP_TIME specifies the minimum number of days before a reusable record in the control file can be reused. In the event a new record needs to be added to a reusable section and the oldest record has not aged enough, the record section expands. If this parameter is set to 0, then reusable sections never expand, and records are reused as needed.
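The parameter defaults to 7 days and is dynamically modifiable; for example, to retain reusable records for at least 14 days (the value here is arbitrary):
SQL> ALTER SYSTEM SET control_file_record_keep_time = 14;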
It is necessary for you to create new control files in the situations listed under "Controlfile Creation" above.

After creating a new control file and using it to open the database, check the alert log to see if the database has detected inconsistencies between the data dictionary and the control file, such as a datafile in the data dictionary that the control file does not list.
If a datafile exists in the data dictionary but not in the new control file, the database creates a placeholder entry in the control file under the name MISSINGnnnn, where nnnn is the file number in decimal. MISSINGnnnn is flagged in the control file as being offline and requiring media recovery.
If the actual datafile corresponding to MISSINGnnnn is read-only or offline normal, then you can make the datafile accessible by renaming MISSINGnnnn to the name of the actual datafile. If MISSINGnnnn corresponds to a datafile that was not read-only or offline normal, then you cannot use the rename operation to make the datafile accessible, because the datafile requires media recovery that is precluded by the results of RESETLOGS. In this case, you must drop the tablespace containing the datafile.
Conversely, if a datafile listed in the control file is not present in the data dictionary, then the database removes references to it from the new control file. In both cases, the database includes an explanatory message in the alert log to let you know what was found.
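For example, if MISSING00004 actually corresponds to a datafile that was read-only, a rename along these lines (the file number and path are hypothetical) makes the datafile accessible again:
ALTER DATABASE RENAME FILE 'MISSING00004' TO '/u01/oradata/orcl/users01.dbf';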

If Oracle Database sends you an error (usually error ORA-01173, ORA-01176, ORA-01177, ORA-01215, or ORA-01216) when you attempt to mount and open the database after creating a new control file, the most likely cause is that you omitted a file from the CREATE CONTROLFILE statement or included one that should not have been listed. In this case, you should restore the missing files and re-create the control file.

Controlfile expansion can occur in both circular and non-circular sections.
Non-circular reuse (e.g. datafile records)
If there are no empty slots for a datafile and you add a new datafile to the database, then (from Oracle 8 onward) the datafile record section will expand to allow the new file to be recorded in the controlfile. Non-circular reuse records cannot be overwritten until the object each describes is removed from the database.
Circular reuse (e.g. archivelog records)
Circular reuse records can be overwritten. Typically, the oldest record will be overwritten whenever there is a need to make room for a new entry, e.g. when a new archivelog is generated and there is no more space in the archivelog section of the controlfile, the oldest archivelog entry will be overwritten by the new archivelog entry. When the CONTROL_FILE_RECORD_KEEP_TIME is set to 0, then the Circular reuse sections cannot expand and are only reused.
The size to which the database control file grows may depend on the number of backups taken, the number of archived redo logs the database generates, and the number of days this information is stored in the control file, among many other factors, each dependent on the corresponding controlfile section.
The database alert log may contain entries similar to this:
Tue Apr 19 16:44:52 2011
Expanded controlfile section 11 from 28 to 109 records
Requested to grow by 81 records; added 3 blocks of records
This alert log message can be seen when the archivelog section (section 11) of the controlfile expands. This section can expand at times when log switching in the database is very frequent.

The controlfile enqueue is a critical resource obtained by a process that performs a controlfile transaction. An update to the controlfile is termed a controlfile transaction.
A few situations in which the CF enqueue is acquired are listed under "Controlfile Enqueue" above: redo log switches, redo log archival, BEGIN/END BACKUP, checkpoints, and crash recovery.
When any process holds the CF enqueue for a long time, the other processes that need to perform a controlfile transaction wait to acquire the enqueue. Holding the enqueue for a very long time can lead to a database hang, so there is a timeout for holding the controlfile enqueue: 900 secs (15 min). If the holder exceeds this timeout, it is killed by killing its session; normally it is a waiting process that kills the holding process which exceeded the timeout. The error is then logged in the alert log.
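To see which sessions currently hold or are waiting for the CF enqueue, a query along these lines can help (run from a privileged session):
SELECT sid, type, lmode, request, ctime
FROM v$lock
WHERE type = 'CF';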

You specify control file names using the CONTROL_FILES initialization parameter in the database initialization parameter file (see "Creating Initial Control Files"). The instance recognizes and opens all the listed files during startup, and the instance writes to and maintains all listed control files during database operation.
If you do not specify files for CONTROL_FILES before database creation, the result depends on whether you are using Oracle Managed Files or Oracle ASM, as described under "Provide Filenames for the Control Files" above.

Every Oracle Database should have at least two control files, each stored on a different physical disk. If a control file is damaged due to a disk failure, the associated instance must be shut down. Once the disk drive is repaired, the damaged control file can be restored using the intact copy of the control file from the other disk and the instance can be restarted. In this case, no media recovery is required.
The behavior of multiplexed control files is summarized under "Multiplex Control Files on Different Disks" above: the database writes to all control files listed in CONTROL_FILES, reads only the first listed file during operation, and becomes inoperable if any copy becomes unavailable.
One way to multiplex control files is to store a control file copy on every disk drive that stores members of redo log groups, if the redo log is multiplexed. By storing control files in these locations, you minimize the risk that all control files and all groups of the redo log will be lost in a single disk failure.

It is very important that you back up your control files: initially, and every time you change the physical structure of your database. Such structural changes include those listed under "Back Up Control Files" above (adding, dropping, or renaming datafiles; adding or dropping a tablespace or altering its read/write state; adding or dropping redo log files or groups).

You want to drop control files from the database, for example, if the location of a control file is no longer appropriate. Remember that the database should have at least two control files at all times.

The first block of the controlfile is a header block that records just the controlfile block size and the number of blocks in the controlfile. The controlfile block size is the same as the database block size. When mounting a database, Oracle checks that the controlfile block size and the file size recorded in the controlfile header block match the db_block_size parameter and the file size reported by the operating system (if available). Otherwise an error is raised to indicate that the controlfile might have been corrupted or truncated.
After the header block, all controlfile blocks occur in pairs. Each logical block is represented by two physical blocks. This is necessary for the controlfile transaction mechanism.
It is theoretically possible that a hot backup of a controlfile could contain a split block. Therefore all controlfile blocks other than the file header have a cache header and tail that can be compared when mounting a database and whenever a controlfile block is read. The block type is 0 for virgin controlfile blocks and 21 otherwise. The physical controlfile block number is used in place of an RDBA in the cache header, and a controlfile sequence number is used in place of an SCN to record when the block was last changed. An ORA-00227 error is returned if the header and tail do not match, or if the block checksum does not match the checksum recorded in the cache header (if any).
The controlfile contains several different types of records, each in its own record section of one or more logical blocks. Records may span block boundaries within their section. The fixed view V$CONTROLFILE_RECORD_SECTION lists the types of records stored in each record section, together with the size of the record type, and the number of record slots available and in use within that section. The underlying X$KCCRS structure includes the starting logical block number (RSLBN) for each section.

Sessions must hold an exclusive lock on the CF enqueue for the duration of controlfile transactions. This prevents concurrent controlfile transactions, and in-flux controlfile reads, because a shared lock on the CF enqueue is needed for controlfile reads. However, there is also a need for recoverability should a process, instance or system failure occur during a controlfile transaction.
For the first record section of the controlfile, the database information entry section, this requirement is trivial, because the database information entry only takes about 210 bytes and is therefore guaranteed to always fit into a single controlfile block that can be written atomically. Therefore changes to the database entry can be implicitly committed as they are written, without any recoverability concerns.
Recoverability for changes to the other controlfile records sections is provided by maintaining all the information in duplicate. Each logical block is represented by two physical blocks. One contains the current information, and the other contains either an old copy of the information, or a pending version that is yet to be committed. To keep track of which physical copy of each logical block contains the current information, Oracle maintains a block version bitmap with the database information entry in the first record section of the controlfile.
To read information from the controlfile, a session must first read the block version bitmap to determine which physical block to read. Then if a change must be made to the logical block, the change is first written to the alternate physical block for that logical block, and then committed by atomically rewriting the block containing the block version bitmap with the bit representing that logical block flipped. When changes need to be made to multiple records in the same controlfile block, such as when updating the checkpoint SCN in all online datafiles, those changes are buffered and then written together. Note that each controlfile transaction requires at least 4 serial I/O operations against the controlfile, and possibly more if multiple blocks are affected, or if the controlfile is multiplexed and asynchronous I/O is not available. So controlfile transactions are potentially expensive in terms of I/O latency.
Whenever a controlfile transaction is committed, the controlfile sequence number is incremented. This number is recorded with the block version bitmap and database information entry in the first record section of the controlfile. It is used in the cache header of each controlfile block in place of an SCN to detect possible split blocks from hot backups. It is also used in queries that perform multiple controlfile reads to ensure that a consistent snapshot of the controlfile has been seen. If not, an ORA-00235 error is returned.



The controlfile transaction mechanism is not used for updates to the checkpoint heartbeat. Instead the size of the checkpoint progress record is overstated as half of the available space in a controlfile block, so that one physical block is allocated to the checkpoint progress record section per thread. Then, instead of using pairs of physical blocks to represent each logical block, each checkpoint progress record is maintained in its own physical block so that checkpoint heartbeat writes can be performed and committed atomically without affecting any other data.

The slots in some control file record sections can be reused circularly. The most obvious examples are the log history, archived logs and offline ranges, but the various backup related record types are also cyclically reusable.
The control_file_record_keep_time parameter sets the minimum number of days that must have elapsed before a reusable controlfile record slot can be reused. The default is 7 days. If all the slots in a record section are in use and that number of days has not yet elapsed since the timestamp on the earliest entry, then Oracle will dynamically expand the record section (and thus the controlfile too) to make more slots available, up to a maximum of 65535 slots per section, or the controlfile size limit. (The controlfile size limit is based on the number of blocks that can be represented in the block version bitmap, and is thus most unlikely to be reached.) Informational "kccrsz" messages about the dynamic expansion of the controlfile (or the failure to do so) may be seen in the alert.log.
There are V$ views for each reusable controlfile record section, each with a timestamp column. These views can be used to estimate the number of controlfile slots that would be required in each record section for a particular keep time setting. The control_file_record_keep_time parameter can also be set to zero to prevent keep time related controlfile expansion. However, dynamic controlfile expansion may nevertheless be required for the non-reusable record sections. For example, the controlfile may grow when creating a tablespace if either of the datafile or tablespace record sections are already full.
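For example, to estimate how many archived log record slots a given keep time would require, one could count the archived logs generated per day from V$ARCHIVED_LOG, one of the timestamped views mentioned above:
SELECT TRUNC(first_time) day, COUNT(*) logs_generated
FROM v$archived_log
GROUP BY TRUNC(first_time)
ORDER BY 1;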
Controlfile resizing happens in a special controlfile transaction under the protection of the CF enqueue. An instance or system failure during the resizing of the controlfile can result in its corruption. Therefore controlfile backups are, as always, important.

The contents of the current controlfile can be dumped in text form to a process trace file in the user_dump_dest directory using the CONTROLF dump. The levels for this dump are listed in the table above.
For example, the oradebug syntax shown above could be used to get a text dump of the controlfile in the trace file of the current process, showing all the controlfile record types but only the oldest and most recent of the circular reuse records.
Of course, the session must be connected AS SYSDBA to use the ORADEBUG facility. However, any session with the ALTER SESSION privilege can use the event syntax shown above to take the same dump.

The refresh controlfile command wait event is not important for tuning, but can be helpful in problem diagnosis. It occurs when mounting a database, and when a controlfile transaction changes the range of used record numbers in any controlfile record section. This information is cached in the SGA and must be updated in all instances whenever it is changed by a controlfile transaction. Accordingly, a refresh controlfile command is sent to the CKPT process in each instance, causing it to refresh the cached controlfile information in the SGA from the physical controlfile. While this is being done, the requesting process waits on the refresh controlfile command wait event. The inter-process communication message that is used can be seen in X$MESSAGES, as shown in the query above.


The three different wait events of ‘control file sequential read’, ‘control file single write’, and ‘control file parallel write’ all contribute to the amount of time Oracle takes to keep the control file current.
Oracle maintains a record of the consistency of the database’s physical structures and operational state through a set of control files. The Oracle control file is essential to the database operation and ability to recover from an outage. In fact, if you lose the control file(s) associated with an instance you may not be able to recover completely.
It is the Oracle control file(s) that records information about the consistency of a database’s physical structures and operational statuses. The database state changes through activities such as adding data files, altering the size or location of datafiles, redo being generated, archive logs being created, backups being taken, SCN numbers changing, or checkpoints being taken.
Through normal operation the control file is continuously hammered with reads and writes as it is being updated.
Why control file wait events occur
Poor performance of reads and writes against control files is often an indication of misplaced control files that share the same I/O access path or are on devices that are heavily used. It is interesting to note that Oracle has always defaulted the creation of control files to a single directory. You can check where your control files reside on disk with the simple V$CONTROLFILE query shown above.
If you wanted to check the amount of total system waits that have occurred for control file reads and writes you could do so by querying the V$SYSTEM_EVENT view. This will give you the total number of waits, timeouts, and accumulated time for this event.
Likewise you could query the V$SESSION_WAIT view to see which sessions are experiencing control file wait events in real time.
Here WAIT_TIME is the elapsed time for control file reads or writes. P1, P2, and P3 are file#, block#, and blocks for 'control file sequential read' and 'control file single write', but files, blocks, and requests for 'control file parallel write'. Since there are no Oracle internal views for looking at control file layout as there are for 'normal' data and temporary files, you can only get a look into the control files by generating a dump. You can do this through the ALTER SESSION command shown above, where the level is typically from 1 to 3 and represents dumping the file header, database & checkpoint records, and circular reuse record types. It is here in the trace file, which is generated in user_dump_dest, that you can see that control file(s) have a file# of 0.
Reducing time spent for control file reads and writes
So how can you reduce the time spent for control file reads and writes? There are two distinct answers to this problem. First, you can ensure that you have placed your control files on disks that are not under excessive heavy loads.
When trying to determine how many control files to have, keep in mind that the more control files you have, the more I/O and time will be needed to write all those copies. It is often better to have the O/S mirror the control files and reduce Oracle overhead.
Second, since the number of reads and writes are dictated by the type of activity within the database, it is often a good idea to revisit application logic and processing steps to ensure that excessive activities are not causing excessive reads and writes to the control files. For instance, code that produces excessive commits and even log switches. Since DBA activity is typically concentrated on modifying structures in the database, you need to be careful when performing batch runs of administrative scripts that could conflict with normal application processing. So be on the lookout for high activity levels such as log switching, checkpointing, backups taking place, and structural changes.
The control files are vital to database operations. Since they hold such vital information about the database it is imperative that they are safeguarded against disk I/O contention while making sure they are protected against corruption or loss.
Balancing these two is often difficult but the common practice is to have multiple copies of the control files, keep them on separate physical disks, and not putting them on heavily accessed disks.
Also consider using operating system mirroring instead of Oracle multiplexing of the control files, taking and verifying your control file backups daily, and creating a new backup control file whenever a structural change is made.
By taking control of your control files you can eliminate I/O contention and steer clear of becoming another war story on the net about yet another lost control file and unrecoverable database system.

Saturday, February 25, 2017

Performance Tuning Interview Questions


What is proactive tuning and reactive tuning?
In proactive tuning, application designers determine which combination of system resources and available Oracle features best meets their needs during design and development. In reactive tuning, a bottom-up approach is used to find and fix the bottlenecks. The goal is to make Oracle run faster.
Describe the level of tuning in oracle?
System-level tuning involves the following steps:
-Monitoring the operating system counters using a tool such as top, gtop, or GKrellM, or the VTune analyzer's counter monitor data collector for applications running on Windows.
-Interpreting the counter data to locate system-level performance bottlenecks and opportunities for improving the way your application interacts with the system.
-SQL-level tuning: Tuning the disk and network I/O subsystems to optimize I/O time, network packet size, and dispatching frequency is called server kernel optimization.
The distribution of data can be studied by the optimizer by collecting and storing optimizer statistics, which enables intelligent execution plans. The choice of db_block_size, db_cache_size, and OS parameters (db_file_multiblock_read_count, cpu_count, etc.) can influence SQL performance, as can tuning the SQL access workload with physical indexes and materialized views.
What is Database design level tuning?
The steps involved in database design level tuning are:
Determination of the data needed by an application (what relations are important, their attributes and structuring the data to best meet the performance goals)
Analysis of data followed by normalization to eliminate data redundancy.
Avoiding data contention.
Localizing access to the data to the partition, process and instance levels.
Using synchronization points in Oracle Parallel Server.
Implementation of 8i enhancements that can help avoid contention:
Consideration on partitioning the data
Consideration over using local or global indexes.
Explain rule-based optimizer and cost-based optimizer.?
Oracle decides how to retrieve the necessary data whenever a valid SQL statement is processed. This decision can be made using one of two methods:
Rule Based Optimizer
If the server has no internal statistics relating to the objects referenced by the statement, then the RBO method is used. This method will be deprecated in future releases of Oracle.
Cost Based Optimizer
The CBO method is used if internal statistics are present. The CBO checks several possible execution plans and selects the one with the lowest cost based on the system resources.
What are object datatypes? Explain the use of object datatypes.?
Object data types are user-defined data types. Both a column and a row can represent an object type. Object type instances can be stored in the database. Object datatypes make it easier to work with complex data, such as images, audio, and video, and they provide higher-level ways to organize and access data in the database.
The SQL attributes of the SELECT INTO clause (the implicit cursor attributes) are SQL%NOTFOUND, SQL%FOUND, SQL%ISOPEN, and SQL%ROWCOUNT:
1. %NOTFOUND: True if no rows were returned.
E.g. IF SQL%NOTFOUND THEN RETURN some_value;
2. %FOUND: True if at least one row was returned.
E.g. IF SQL%FOUND THEN RETURN some_value;
3. %ISOPEN: True if the SQL cursor is open. Will always be false, because the database opens and closes the implicit cursor used to retrieve the data.
4. %ROWCOUNT: Number of rows returned. Equals 0 if no rows were found (though the NO_DATA_FOUND exception is then raised) and 1 if one or more rows are found (if more than one, the TOO_MANY_ROWS exception is raised).
What is translate and decode in oracle?
Translate: the translate function replaces a sequence of characters in a string with another set of characters, one character at a time.
Syntax:
translate( string1, string_to_replace, replacement_string )
Example:
translate('1tech23', '123', '456');
Decode: The DECODE function compares one expression to one or more other expressions and, when the base expression is equal to a search expression, returns the corresponding result expression; when no match is found, it returns the default expression if one is specified, or null if it is not.
Syntax:
DECODE (expr , search, result [, search , result]… [, default])
Example:
SELECT employee_name, decode(employee_id, 10000, 'tom', 10001, 'peter', 10002, 'jack', 'Gateway') result FROM employee;
What is oracle correlated sub-queries? Explain with an example?
A query which uses values from the outer query is called a correlated subquery. The subquery is executed once for each row processed by the outer query. Example:
Here, the subquery references the employee_id of the outer query. Because the value of the employee_id changes for each row of the outer query, the database must rerun the subquery for each row comparison. The outer query knows nothing about the inner query except its results.
select emp.employee_id, emp.appraisal_id, emp.appraisal_amount
from employee emp
where emp.appraisal_amount < (select max(e.appraisal_amount)
                              from employee e
                              where e.employee_id = emp.employee_id);
Explain union and intersect with examples?
-UNION: The UNION operator is used to combine the result-sets of two or more SELECT statements. The tables of both SELECT statements must have the same number of columns with similar data types. UNION eliminates duplicate rows.
Syntax:
SELECT column_name(s) FROM table_name1
UNION
SELECT column_name(s) FROM table_name2
Example:
SELECT emp_Name FROM Employees_india
UNION
SELECT emp_Name FROM Employees_USA
-INTERSECT allows combining results of two or more select queries. If a record exists in one query and not in the other, it will be omitted from the INTERSECT results.
What is difference between open_form and call_form? What is new_form built-in in oracle form?
Open_form opens the indicated form. Call_form not only opens the indicated form but also keeps the parent form alive. When new_form is called, the newly indicated form is opened and the old one is exited, releasing its memory. The new form is run using the same Runform options as the parent form.
What is the difference between DBFile Sequential and Scattered Reads?
Both "db file sequential read" and "db file scattered read" events signify time waited for I/O read requests to complete. Time is reported in hundredths of a second for Oracle 8i releases and below, and thousandths of a second for Oracle 9i and above. Most people confuse these events with each other because they think of how data is read from disk; instead they should think of how data is read into the SGA buffer cache.
db file sequential read: 
A sequential read operation reads data into contiguous memory (usually a single-block read with p3=1, but can be multiple blocks). Single block I/Os are usually the result of using indexes. This event is also used for rebuilding the controlfile and reading datafile headers (P2=1). In general, this event is indicative of disk contention on index reads.
db file scattered read: 
Similar to db file sequential reads, except that the session is reading multiple data blocks and scatters them into different discontinuous buffers in the SGA. This statistic NORMALLY indicates disk contention on full table scans. Rarely, data from full table scans can fit into a contiguous buffer area; these waits would then show up as sequential reads instead of scattered reads.
The following query shows average wait time for sequential versus scattered reads: 
prompt AVERAGE WAIT TIME FOR READ REQUESTS
select a.average_wait "SEQ READ", b.average_wait "SCAT READ"
from sys.v_$system_event a, sys.v_$system_event b
where a.event = 'db file sequential read'
and b.event = 'db file scattered read';
Explain about performance tuning enhancements?
Oracle includes many performance tuning enhancements like:
-Automatic Performance Diagnostic and Tuning Features
-Automatic Shared Memory Management – Automatic Shared Memory Management puts Oracle in control of allocating memory within the SGA
-Wait Model Improvements – A number of views have been updated and added to improve the wait model.
-Automatic Optimizer Statistics Collection – gathers optimizer statistics using a scheduled job called GATHER_STATS_JOB
-Dynamic Sampling – enables the server to improve performance
-CPU Costing – default cost model for the optimizer (CPU+I/O), with the cost unit as time
-Optimizer Hints
-Rule Based Optimizer Obsolescence – No more used
-Tracing Enhancements – End to End Application Tracing which allows a client process to be identified via the client identifier rather than the typical session id
-SAMPLE Clause Enhancements
-Hash Partitioned Global Indexes
You see multiple fragments in the SYSTEM tablespace, what should you check first?
Ensure that users don’t have the SYSTEM tablespace as their TEMPORARY or DEFAULT tablespace assignment by checking the DBA_USERS view.
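A sketch of such a check against DBA_USERS:
SELECT username, default_tablespace, temporary_tablespace
FROM dba_users
WHERE default_tablespace = 'SYSTEM'
   OR temporary_tablespace = 'SYSTEM';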
What are some indications that you need to increase the SHARED_POOL_SIZE parameter?
Poor data dictionary or library cache hit ratios, getting error ORA-04031. Another indication is steadily decreasing performance with all other tuning parameters the same.
When should you increase copy latches? What parameters control copy latches?
When you get excessive contention for the copy latches as shown by the “redo copy” latch hit ratio. You can increase copy latches via the initialization parameter LOG_SIMULTANEOUS_COPIES to twice the number of CPUs on your system.
If you see statistics that deal with “undo” what are they really talking about?
Rollback segments and associated structures.
If a tablespace has a default PCTINCREASE of zero, what will this cause (in relation to the SMON process)?
The SMON process won't automatically coalesce its free space fragments.
How can you tell if a tablespace has excessive fragmentation?
If a select against the dba_free_space view shows that the count of a tablespace's extents is greater than the count of its data files, then it is fragmented.
Why and when should one tune?
One of the biggest responsibilities of a DBA is to ensure that the Oracle database is tuned properly. The Oracle RDBMS is highly tunable and allows the database to be monitored and adjusted to increase its performance.
One should do performance tuning for the following reasons:
The speed of computing might be wasting valuable human time (users waiting for a response);
Enable your system to keep up with the speed at which business is conducted; and
Optimize hardware usage to save money (companies are spending millions on hardware).
Although this FAQ is not overly concerned with hardware issues, one needs to remember that you cannot tune a Buick into a Ferrari.
What database aspects should be monitored?
One should implement a monitoring system to constantly monitor the following aspects of a database. Writing custom scripts, implementing Oracle’s Enterprise Manager, or buying a third-party monitoring product can achieve this. If an alarm is triggered, the system should automatically notify the DBA (e-mail, page, etc.) to take appropriate action.
Infrastructure availability:
– Is the database up and responding to requests
– Are the listeners up and responding to requests
– Are the Oracle Names and LDAP Servers up and responding to requests
– Are the Web Listeners up and responding to requests
Things that can cause service outages:
– Is the archive log destination filling up?
– Objects getting close to their max extents
– Tablespaces running low on free space / objects that would not be able to extend
– User and process limits reached
Things that can cause bad performance:
See question “What tuning indicators can one use?”.
Where should the tuning effort be directed?
Consider the following areas for tuning. The order in which steps are listed needs to be maintained to prevent tuning side effects. For example, it is no good increasing the buffer cache if you can reduce I/O by rewriting a SQL statement.
Database Design (if it’s not too late):
Poor system performance usually results from a poor database design. One should generally normalize to the 3NF. Selective denormalization can provide valuable performance improvements. When designing, always keep the “data access path” in mind. Also look at proper data partitioning, data replication, aggregation tables for decision support systems, etc.
Application Tuning:
Experience shows that approximately 80% of all Oracle system performance problems are resolved by coding optimal SQL. Also consider proper scheduling of batch tasks after peak working hours.
Memory Tuning:
Properly size your database buffers (shared pool, buffer cache, log buffer, etc) by looking at your buffer hit ratios. Pin large objects into memory to prevent frequent reloads.
Disk I/O Tuning:
Database files need to be properly sized and placed to provide maximum disk subsystem throughput. Also look for frequent disk sorts, full table scans, missing indexes, row chaining, data fragmentation, etc.
Eliminate Database Contention:
Study database locks, latches and wait events carefully and eliminate where possible.
Tune the Operating System:
Monitor and tune operating system CPU, I/O and memory utilization. For more information, read the related Oracle FAQ dealing with your specific operating system.
What tuning indicators can one use?
The following high-level tuning indicators can be used to establish if a database is performing optimally or not:
– Buffer Cache Hit Ratio
Formula: Hit Ratio = (Logical Reads – Physical Reads) / Logical Reads
Action: Increase DB_CACHE_SIZE (DB_BLOCK_BUFFERS prior to 9i) to increase hit ratio
– Library Cache Hit Ratio
Action: Increase the SHARED_POOL_SIZE to increase hit ratio
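As an illustration, the inputs for the buffer cache hit ratio can be read from V$SYSSTAT with a query such as this; the ratio is then (db block gets + consistent gets - physical reads) / (db block gets + consistent gets):
SELECT name, value
FROM v$sysstat
WHERE name IN ('db block gets', 'consistent gets', 'physical reads');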
What tools/utilities does Oracle provide to assist with performance tuning?
Oracle provides the following tools/utilities to assist with performance monitoring and tuning:
– TKProf
– UTLBSTAT.SQL and UTLESTAT.SQL – Begin and end stats monitoring
– Statspack
– Oracle Enterprise Manager – Tuning Pack
What is STATSPACK and how does one use it?
Statspack is a set of performance monitoring and reporting utilities provided by Oracle from Oracle8i and above. Statspack provides improved BSTAT/ESTAT functionality, though the old BSTAT/ESTAT scripts are still available. For more information about STATSPACK, read the documentation in file $ORACLE_HOME/rdbms/admin/spdoc.txt.
Install Statspack:
cd $ORACLE_HOME/rdbms/admin
sqlplus "/ as sysdba" @spdrop.sql -- Drop Statspack (if previously installed)
sqlplus "/ as sysdba" @spcreate.sql -- Enter tablespace names when prompted
Use Statspack:
sqlplus perfstat/perfstat
exec statspack.snap; -- Take a performance snapshot
exec statspack.snap; -- Take a second snapshot after some workload
-- Get a list of snapshots:
select SNAP_ID, SNAP_TIME from STATS$SNAPSHOT;
@spreport.sql — Enter two snapshot id’s for difference report
Other Statspack Scripts:
– sppurge.sql – Purge a range of Snapshot Id’s between the specified begin and end Snap Id’s
– spauto.sql – Schedule a dbms_job to automate the collection of STATSPACK statistics
– spcreate.sql – Installs the STATSPACK user, tables and package on a database (Run as SYS).
– spdrop.sql – Deinstall STATSPACK from database (Run as SYS)
– sppurge.sql – Delete a range of Snapshot Id’s from the database
– spreport.sql – Report on differences between values recorded in two snapshots
– sptrunc.sql – Truncates all data in Statspack tables
When is cost based optimization triggered?
It's important to have statistics on all tables for the CBO (Cost Based Optimizer) to work correctly. If one table involved in a statement does not have statistics, Oracle has to revert to rule-based optimization for that statement. So you really want all tables to have statistics right away; it won't help much to have just the larger tables analyzed.
Generally, the CBO can change the execution plan when you:
Change statistics of objects by doing an ANALYZE;
Change some initialization parameters (for example: hash_join_enabled, sort_area_size,
db_file_multiblock_read_count).
How can one optimize %XYZ% queries?
It is possible to improve %XYZ% queries by forcing the optimizer to scan all the entries from the index instead of the table. This can be done by specifying hints.
If the index is physically smaller than the table (which is usually the case) it will take less time to scan the entire index than to scan the entire table.
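For illustration, an INDEX_FFS (index fast full scan) hint can force the optimizer to read all the index entries; the table and index names here are hypothetical, and the index must contain every column the query needs:
SELECT /*+ INDEX_FFS(e emp_name_idx) */ employee_name
FROM employees e
WHERE employee_name LIKE '%XYZ%';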
Where can one find I/O statistics per table?
The UTLESTAT report shows I/O per tablespace, but one cannot see which tables in the tablespace have the most I/O.
The $ORACLE_HOME/rdbms/admin/catio.sql script creates a sample_io procedure and table to gather the required information.
After executing the procedure, one can do a simple SELECT * FROM io_per_object; to extract the required information.
My query was fine last week and now it is slow. Why?
The likely cause of this is because the execution plan has changed. Generate a current explain plan of the offending query and compare it to a previous one that was taken when the query was performing well. Usually the previous plan is not available.
Some factors that can cause a plan to change are:
– Which tables are currently analyzed? Were they previously analyzed? (i.e. was the query using RBO before and CBO now?)
– Has OPTIMIZER_MODE been changed in INIT.ORA?
– Has the DEGREE of parallelism been defined/changed on any table?
– Have the tables been re-analyzed? Were the tables analyzed using estimate or compute? If estimate, what percentage was used?
– Have the statistics changed?
– Has the INIT.ORA parameter DB_FILE_MULTIBLOCK_READ_COUNT been changed?
– Has the INIT.ORA parameter SORT_AREA_SIZE been changed?
– Have any other INIT.ORA parameters been changed?
– What do you think the plan should be? Run the query with hints to see if this produces the required performance.
Why is Oracle not using the damn index?
This problem normally arises only when the query plan is being generated by the Cost Based Optimizer. The usual cause is that the CBO calculates that executing a full table scan would be faster than accessing the table via the index. Fundamental things that can be checked are:
-USER_TAB_COLUMNS.NUM_DISTINCT – This column defines the number of distinct values the column holds.
-USER_TABLES.NUM_ROWS – If NUM_DISTINCT = NUM_ROWS, then using an index would be preferable to doing a FULL TABLE SCAN. As NUM_DISTINCT decreases, the cost of using an index increases, making the index less desirable.
-USER_INDEXES.CLUSTERING_FACTOR – This defines how ordered the rows are in the index. If CLUSTERING_FACTOR approaches the number of blocks in the table, the rows are ordered. If it approaches the number of rows in the table, the rows are randomly ordered. In such a case, it is unlikely that index entries in the same leaf block will point to rows in the same data blocks.
-Decrease the INIT.ORA parameter DB_FILE_MULTIBLOCK_READ_COUNT – A higher value will make the cost of a FULL TABLE SCAN cheaper.
-Remember that you MUST supply the leading column of an index, for the index to be used (unless you use a FAST FULL SCAN or SKIP SCANNING).
-There are many other factors that affect the cost, but sometimes the above can help to show why an index is not being used by the CBO. If from checking the above you still feel that the query should be using an index, try specifying an index hint. Obtain an explain plan of the query either using TKPROF with TIMED_STATISTICS, so that one can see the CPU utilization, or with AUTOTRACE to see the statistics. Compare this to the explain plan when not using an index.
When should one rebuild an index?
You can run the ‘ANALYZE INDEX VALIDATE STRUCTURE’ command on the affected indexes – each invocation of this command creates a single row in the INDEX_STATS view. This row is overwritten by the next ANALYZE INDEX command, so copy the contents of the view into a local table after each ANALYZE. The ‘badness’ of the index can then be judged by the ratio of ‘DEL_LF_ROWS’ to ‘LF_ROWS’.
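A sketch of this procedure (the index name is hypothetical; INDEX_STATS holds a single row, visible only in the same session):
ANALYZE INDEX emp_name_idx VALIDATE STRUCTURE;

SELECT name, lf_rows, del_lf_rows,
       ROUND(del_lf_rows / GREATEST(lf_rows, 1) * 100, 2) pct_deleted
FROM index_stats;
A high percentage of deleted leaf rows suggests the index is a rebuild candidate.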
How does one tune Oracle Wait events?
Some wait events from V$SESSION_WAIT and V$SYSTEM_EVENT views:
Event Name – Tuning Recommendation:
db file sequential read – Tune SQL to do less I/O. Make sure all objects are analyzed. Redistribute I/O across disks.
buffer busy waits – Increase DB_CACHE_SIZE (DB_BLOCK_BUFFERS prior to 9i). Analyze contention from SYS.V$BH.
log buffer space – Increase the LOG_BUFFER parameter or move log files to faster disks.

Explain how to tune the Redo log buffer?
The redo log buffer is used for logging the events that take place in the database, that is, keeping a list of the changes made to it. This information is stored in redo entries, which are required later to redo (replay) the changes that have been made to the database, for example during recovery.
Steps involved in tuning the Redo log buffer:
I. Using the LOG_BUFFER parameter, first identify the size of the redo log buffer.
II. Determine the number of times processes have had to wait for space in the buffer, and for how long.
III. Make sure the waits are equal to zero; if they are not, bring them toward zero by increasing the size of the buffer.
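One way to check step II is to look at the related statistics in V$SYSSTAT; non-zero, steadily growing values indicate waits for redo buffer space:
SELECT name, value
FROM v$sysstat
WHERE name IN ('redo buffer allocation retries', 'redo log space requests');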
What does Database Tuning contain mainly?
Database tuning mainly revolves around wait events and hit ratios.


Explain the use of TKPROF?

TKPROF is a tuning tool used to determine the execution times and CPU consumption of SQL statements. You use it by first setting timed_statistics to true in the initialization file and then turning tracing on, either for the session using the ALTER SESSION command or for the entire database through the sql_trace parameter. Once you have the trace file, you run the tkprof tool against it and view the result. It can also be used to produce explain plan output.
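A minimal usage sketch (the trace file name and credentials are hypothetical):
SQL> ALTER SESSION SET timed_statistics = TRUE;
SQL> ALTER SESSION SET sql_trace = TRUE;
-- run the SQL of interest, disable tracing, then from the OS:
$ tkprof orcl_ora_12345.trc tkprof_report.txt sys=no explain=scott/tiger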

What is Latch Free Event?

The latch free event occurs when one process is waiting for a latch that is held by another process.

What kind of information can be obtained from summary advisor?

A summary advisor is a tool used for understanding and choosing materialized views. It helps in choosing the appropriate set of materialized views for a specific workload, and it also provides information about activity log details, materialized view usage, and materialized view recommendations.

What is the fastest query method for a table?

The fastest query method for a table is a fetch by ROWID.

What are the tools provided by Oracle to assist performance tuning?

The following are the tools provided by Oracle to assist with performance tuning:
SQL Performance Analyzer (SPA) is similar to Database Replay, but with a few differences; for example, it does not perform the workload recording.
Oracle Enterprise Manager (OEM) is a set of tools, which helps the management of all factors of an Oracle database instances, Oracle infrastructure, Oracle web servers and application servers.
Statspack is provided by Oracle to perform reporting and monitoring.
Automatic Workload Repository (AWR) is a tool built into every Oracle database; by default, it captures a snapshot of workloads and all key statistics in the database every 60 minutes.

TKProf is used to accurately assess the efficiency of SQL statements during an application run.

Source: Internet.