> What is Dataguard?
Data Guard provides a comprehensive set of services that create, maintain, manage, and monitor one or more standby databases to enable production Oracle databases to survive disasters and data corruptions. Data Guard maintains these standby databases as copies of the production database. Data Guard can be used with traditional backup, restoration, and cluster techniques to provide a high level of data protection and data availability.
> What is DG Broker?
DG Broker "it is the management and monitoring tool".
Oracle dataguard broker is a distributed management framework that automates and centralizes the creation , maintenance and monitoring of DG configuration.
All management operations can be performed either through OEM, which uses the broker (or) broker specified command-line tool interface "DGMGRL".
DG Broker "it is the management and monitoring tool".
Oracle dataguard broker is a distributed management framework that automates and centralizes the creation , maintenance and monitoring of DG configuration.
All management operations can be performed either through OEM, which uses the broker (or) broker specified command-line tool interface "DGMGRL".
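As a quick illustration (a minimal sketch; the connect string and the database names prod and stby are placeholders, not part of the original answer), a typical DGMGRL session looks like this:
DGMGRL> CONNECT sys@prod
DGMGRL> SHOW CONFIGURATION;
DGMGRL> SHOW DATABASE 'stby';
DGMGRL> SWITCHOVER TO 'stby';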
> What is the difference between Dataguard and Standby?
Dataguard:
Data Guard is the mechanism/tool used to create and maintain standby databases.
Data Guard is set up between a primary and a standby instance.
Data Guard is only available in Enterprise Edition.
Standby Database:
A physical standby database provides a physically identical copy of the primary database, with on-disk database structures that are identical to the primary database on a block-for-block basis.
Standby capability is available in Standard Edition.
> What are the differences between Physical/Logical standby databases? How would you decide which one is best suited for your environment?
Physical standby DB:
As the name implies, it is a physically identical copy of the primary database (datafiles, schema and other physical structures).
It is kept synchronized with the primary database through Redo Apply, which applies the redo received from the primary to the standby.
Logical Standby DB:
As the name implies, the logical information is the same as in the production database, but the physical structure can be different. It is kept synchronized with the primary database through SQL Apply, which transforms the redo received from the primary database into SQL statements and then executes those SQL statements on the standby database.
A physical standby database can be opened in read-only mode and made available to application users (only SELECT is allowed during this period), but redo received from the primary database cannot be applied during that time. A logical standby database does not have this limitation: it can be opened in normal (read/write) mode and made available to users while archived logs received from the primary database are applied at the same time.
For a large OLTP database with heavy transaction volume, a logical standby database is often the better choice.
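As a quick check of what kind of standby you are looking at and how far behind it is (a minimal sketch using standard dynamic views, run on the standby):
SQL> SELECT database_role, open_mode FROM v$database;
SQL> SELECT name, value FROM v$dataguard_stats;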
> Explain Active Dataguard?
11g Active Data Guard - Oracle Active Data Guard enables read-only access to a physical standby database for queries, sorting, reporting, web-based access, etc., while continuously applying changes received from the production database. Oracle Active Data Guard also enables the use of fast incremental backups when offloading backups to a standby database, and can provide additional benefits of high availability and disaster protection against planned or unplanned outages at the production site.
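As a rough sketch of how this is typically enabled on the standby (the commands are shown as an illustration, not as the article's own procedure):
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE OPEN READ ONLY;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;
With the standby open read-only while redo apply continues, it operates as an Active Data Guard standby (this requires the Active Data Guard license option).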
> What is a Snapshot Standby Database?
11g Snapshot Standby Database
Oracle 11g introduces the snapshot standby database, which essentially is an updatable standby database created from a physical standby database. We can convert a physical standby database to a snapshot standby database, do some kind of testing on a database that is a read-write copy of the current primary or production database, and then finally revert it to its earlier state as a physical standby database.
While the snapshot standby database is open in read-write mode, redo is still received from the primary database, but it is not applied. After converting it back to a physical standby database, it is resynchronized with the primary by applying the accumulated redo data that was shipped from the primary database but not applied. Using a snapshot standby, we are able to do real-time application testing using near real-time production data. Very often we are required to create production clones for the purpose of testing; using snapshot standby databases we can meet the same requirement while sparing the effort, time, resources and disk space.
Snapshot Standby Database (UPDATEABLE SNAPSHOT FOR TESTING)
A snapshot standby database is a fully updatable standby database that is created by converting a physical standby database into a snapshot standby database.
Like a physical or logical standby database, a snapshot standby database receives and archives redo data from a primary database. Unlike a physical or logical standby database, a snapshot standby database does not apply the redo data that it receives. The redo data received by a snapshot standby database is not applied until the snapshot standby is converted back into a physical standby database, after first discarding any local updates made to the snapshot standby database.
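A minimal sketch of the conversion in both directions (illustrative steps, assuming a flash recovery area is already configured on the standby):
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
SQL> ALTER DATABASE OPEN;
-- run tests against the read-write snapshot standby, then revert:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;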
> What is the default mode the standby will be in, SYNC or ASYNC?
ASYNC
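For illustration (a sketch reusing the stdby_srvc service name from the delay example later in this article; the attribute values shown are typical, not mandatory), the transport mode is set per destination on the primary:
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=stdby_srvc ASYNC NOAFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)';
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=stdby_srvc SYNC AFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)';
ASYNC is used for Maximum Performance (the default protection mode), while SYNC is required for Maximum Availability and Maximum Protection.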
> Dataguard Architecture?
Data Guard Configurations:
A Data Guard configuration consists of one production database and one or more standby databases. The databases in a Data Guard configuration are connected by Oracle Net and may be dispersed geographically. There are no restrictions on where the databases are located, provided they can communicate with each other.
Dataguard Architecture
The Oracle 9i Data Guard architecture incorporates the following items:
• Primary Database – A production database that is used to create standby databases. The archive logs from the primary database are transferred to and applied on the standby databases. Each standby can only be associated with a single primary database, but a single primary database can be associated with multiple standby databases.
• Standby Database – A replica of the primary database.
• Log Transport Services – Control the automatic transfer of archive redo log files from the primary database to one or more standby destinations.
• Network Configuration – The primary database is connected to one or more standby databases using Oracle Net.
• Log Apply Services – Apply the archived redo logs to the standby database. The Managed Recovery Process (MRP) actually does the work of maintaining and applying the archived redo logs.
• Role Management Services – Control the changing of database roles from primary to standby. The services include switchover, switchback and failover.
• Data Guard Broker – Controls the creation and monitoring of Data Guard. It comes with a GUI and command line interface.
Primary Database:
A Data Guard configuration contains one production database, also referred to as the primary database, that functions in the primary role. This is the database that is accessed by most of your applications.
Standby Database:
A standby database is a transactionally consistent copy of the primary database. Using a backup copy of the primary database, you can create up to nine standby databases and incorporate them in a Data Guard configuration. Once created, Data Guard automatically maintains each standby database by transmitting redo data from the primary database and then applying the redo to the standby database.
The types of standby databases are as follows:
Physical standby database:
Provides a physically identical copy of the primary database, with on disk database structures that are identical to the primary database on a block-for-block basis. The database schema, including indexes, are the same. A physical standby database is kept synchronized with the primary database, through Redo Apply, which recovers the redo data received from the primary database and applies the redo to the physical standby database.
Logical standby database:
Contains the same logical information as the production database, although the physical organization and structure of the data can be different. The logical standby database is kept synchronized with the primary database through SQL Apply, which transforms the data in the redo received from the primary database into SQL statements and then executes the SQL statements on the standby database.
> What are the services required on the primary and standby database?
The services required on the primary database are:
• Log Writer Process (LGWR) – Collects redo information and updates the online redo logs. It can also create local archived redo logs and transmit online redo to standby databases.
• Archiver Process (ARCn) – One or more archiver processes make copies of online redo logs either locally or remotely for standby databases.
• Fetch Archive Log (FAL) Server – Services requests for archive redo logs from FAL clients running on multiple standby databases. Multiple FAL servers can be run on a primary database, one for each FAL request.
The services required on the standby database are:
• Fetch Archive Log (FAL) Client – Pulls archived redo log files from the primary site. Initiates transfer of archived redo logs when it detects a gap sequence.
• Remote File Server (RFS) – Receives archived and/or standby redo logs from the primary database.
• Archiver (ARCn) Processes – Archives the standby redo logs applied by the managed recovery process (MRP).
• Managed Recovery Process (MRP) – Applies archive redo log information to the standby database.
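As a quick way to see these processes at work on the standby (a minimal sketch; the available columns vary slightly by version):
SQL> SELECT process, status, thread#, sequence# FROM v$managed_standby;
The RFS, ARCH and MRP0 rows show which log sequence is currently being received, archived and applied.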
> What is RTS (Redo Transport Services) in Dataguard?
It controls the automated transfer of redo data from the production database to one or more archival destinations. The redo transport services perform the following tasks:
a) Transmit redo data from the primary system to the standby systems in the configuration.
b) Manage the process of resolving any gaps in the archived redo log files due to a network failure.
c) Automatically detect missing or corrupted archived redo log files on a standby system and automatically retrieve replacement archived redo log files from the primary database or another standby database.
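For illustration (a minimal sketch using standard dynamic views; v$archive_gap is queried on the standby), transport problems and gaps can be checked as follows:
SQL> SELECT dest_id, status, error FROM v$archive_dest_status WHERE status <> 'INACTIVE';
SQL> SELECT thread#, low_sequence#, high_sequence# FROM v$archive_gap;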
> What are the Protection Modes in Dataguard?
Data Guard Protection Modes
In these descriptions, a synchronized standby database is one that meets the minimum requirements of the configured data protection mode and that does not have a redo gap.
Maximum Availability
This protection mode provides the highest level of data protection that is possible without compromising the availability of a primary database. Transactions do not commit until all redo data needed to recover those transactions has been written to the online redo log and to at least one synchronized standby database. If the primary database cannot write its redo stream to at least one synchronized standby database, it operates as if it were in maximum performance mode to preserve primary database availability until it is again able to write its redo stream to a synchronized standby database. This mode ensures that no data loss will occur if the primary database fails, but only if a second fault does not prevent a complete set of redo data from being sent from the primary database to at least one standby database.
Maximum Performance
This protection mode provides the highest level of data protection that is possible without affecting the performance of a primary database. This is accomplished by allowing transactions to commit as soon as all redo data generated by those transactions has been written to the online log. Redo data is also written to one or more standby databases, but this is done asynchronously with respect to transaction commitment, so primary database performance is unaffected by delays in writing redo data to the standby database(s). This protection mode offers slightly less data protection than maximum availability mode and has minimal impact on primary database performance. This is the default protection mode.
Maximum Protection
This protection mode ensures that zero data loss occurs if a primary database fails. To provide this level of protection, the redo data needed to recover a transaction must be written to both the online redo log and to at least one synchronized standby database before the transaction commits. To ensure that data loss cannot occur, the primary database will shut down, rather than continue processing transactions, if it cannot write its redo stream to at least one synchronized standby database. Because this data protection mode prioritizes data protection over primary database availability, Oracle recommends that a minimum of two standby databases be used to protect a primary database that runs in maximum protection mode to prevent a single standby database failure from causing the primary database to shut down.
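As a brief illustration (a sketch; Maximum Availability and Maximum Protection additionally require SYNC redo transport to at least one standby):
SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;
SQL> SELECT protection_mode, protection_level FROM v$database;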
> How to delay the application of logs to a physical standby?
A standby database automatically applies redo logs when they arrive from the primary database. But in some cases, we want to create a time lag between the archiving of a redo log at the primary site, and the application of the log at the standby site.
Modify the LOG_ARCHIVE_DEST_n initialization parameter on the primary database to set a delay for the standby database.
Example, for a 60-minute delay:
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=stdby_srvc DELAY=60';
The DELAY attribute is expressed in minutes.
The archived redo logs are still automatically copied from the primary site to the standby site, but the logs are not immediately applied to the standby database. The logs are applied when the specified time interval expires.
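Two related points, stated here as general Data Guard behavior rather than as part of the original answer: if real-time apply is enabled on the standby, the DELAY attribute is ignored, and a delay already in effect can be overridden on the standby with:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE NODELAY;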
> What is the difference between db file sequential read and db file scattered read?
A db file sequential read is associated with index reads, whereas a db file scattered read is associated with full table scans. A db file sequential read reads a block into contiguous memory, while a db file scattered read reads multiple blocks and scatters them into the buffer cache.
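To see how much time an instance spends on each wait event (a minimal sketch using a standard dynamic view):
SQL> SELECT event, total_waits, time_waited FROM v$system_event WHERE event IN ('db file sequential read', 'db file scattered read');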
> Which factors are to be considered for creating an index on a table? How to select a column for an index?
Creation of an index on a table depends on the size of the table and the volume of data. If the table is large and we need only a few rows for a query or report, then we should create an index. There are some basic criteria for selecting a column for indexing, such as cardinality and frequent usage in the WHERE condition of SELECT queries. Business rules also force index creation, such as a primary key, because configuring a primary key or unique key automatically creates a unique index.
It is important to note that creating too many indexes can affect the performance of DML on the table, because a single transaction then has to modify the various index segments and the table simultaneously.
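One way to check column cardinality before deciding (a sketch; SCOTT.EMP is just an example table, and statistics must already have been gathered):
SQL> SELECT column_name, num_distinct, num_nulls FROM dba_tab_col_statistics WHERE owner = 'SCOTT' AND table_name = 'EMP';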
> Is creating an index online possible?
YES. You can create and rebuild indexes online. This enables you to update base tables at the same time you are building or rebuilding indexes on that table. You can perform DML operations while the index building is taking place, but DDL operations are not allowed.
Parallel execution is not supported when creating or rebuilding an index online.
CREATE INDEX emp_name ON emp (mgr, emp1, emp2, emp3) ONLINE;
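Rebuilding an existing index works the same way (shown here for the index from the example above):
ALTER INDEX emp_name REBUILD ONLINE;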
> How to recover password history in oracle 10g?
You can query the table user_history$. The password history is stored in this table.
> How can you track the password change for a user in oracle?
Oracle only tracks the date that the password will expire, based on when it was last changed. Thus, by listing
DBA_USERS.EXPIRY_DATE and subtracting PASSWORD_LIFE_TIME you can determine when the password was last changed. You can also check the last password change time directly from the PTIME column in the USER$ table (on which the DBA_USERS view is based). But if you have PASSWORD_REUSE_TIME and/or PASSWORD_REUSE_MAX set in a profile assigned to a user account, then you can reference the dictionary table USER_HISTORY$ for when the password was changed for this account.
SELECT user$.NAME, user$.PASSWORD, user$.ptime, user_history$.password_date
FROM SYS.user_history$, SYS.user$
WHERE user_history$.user# = user$.user#;
> What is Secure External Password Store (SEPS)?
Through the use of SEPS you can store password credentials for connecting to databases in a client-side Oracle wallet; this wallet stores sign-on credentials. This feature was introduced in Oracle 10g. Application code, scheduled jobs and scripts no longer need embedded usernames and passwords. This reduces risk because the passwords are no longer exposed, and password management policies are more easily enforced without changing application code whenever the username or password changes.
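A rough setup sketch (the wallet directory, the db_alias TNS alias and the scott account are placeholders; the commands are illustrative):
mkstore -wrl /u01/app/oracle/wallet -create
mkstore -wrl /u01/app/oracle/wallet -createCredential db_alias scott
Then in sqlnet.ora:
WALLET_LOCATION = (SOURCE = (METHOD = FILE) (METHOD_DATA = (DIRECTORY = /u01/app/oracle/wallet)))
SQLNET.WALLET_OVERRIDE = TRUE
After that, clients can connect without an embedded password: sqlplus /@db_alias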
> Why do we need the CASCADE option with the DROP USER command when dropping a user, and why does "DROP USER" fail when we don't use it?
If the user owns any objects, then you are not able to drop that user without using the CASCADE option. The DROP USER command with the CASCADE option drops the user along with all of its associated objects. Remember that it is a DDL command; after its execution a rollback cannot be performed.
> What is the difference between Redo, Rollback and Undo?
I find there is always some confusion when talking about Redo, Rollback and Undo. They all sound like pretty much the same thing, or at least pretty close.
Redo: Every Oracle database has a set of (two or more) redo log files. The redo log records all changes made to data, including both uncommitted and committed changes. In addition to the online redo logs, Oracle also stores archived redo logs. All redo logs are used in recovery situations.
Rollback: More specifically, rollback segments. Rollback segments store the data as it was before changes were made. This is in contrast to the redo log, which is a record of the inserts/updates/deletes.
Undo: Rollback segments. They are really one and the same. Undo data is stored in the undo tablespace. Undo is helpful in building a read-consistent view of data.
> You have more than 3 instances running on the Linux server. How can you determine which shared memory and semaphores are associated with which instance?
Oradebug is an undocumented utility supplied by Oracle. The oradebug help command lists the available commands.
SQL> oradebug setmypid
SQL> oradebug ipc
SQL> oradebug tracefile_name
The oradebug ipc command dumps the instance's shared memory and semaphore information to a trace file, whose name is shown by oradebug tracefile_name.
> Why is a dropped table not going into the Recycle bin?
If you are using the SYS user to drop any table, then that object will not go to the recycle bin, because there is no recycle bin for the SYSTEM tablespace, even if the recyclebin parameter has already been set to TRUE.
Select * from v$parameter where name = 'recyclebin';
Show parameter recyclebin;
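For a non-SYS schema the behaviour can be verified like this (a quick sketch; emp is just an example table):
SQL> DROP TABLE emp;
SQL> SHOW RECYCLEBIN;
SQL> FLASHBACK TABLE emp TO BEFORE DROP;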
> Temp tablespace is 100% full and there is no space available to add datafiles to increase the temp tablespace. What can you do in that case to free up TEMP tablespace?
Closing some of the idle sessions connected to the database will help you free some TEMP space. Otherwise you can also use
'Alter Tablespace PCTINCREASE 1' followed by 'Alter Tablespace PCTINCREASE 0'.
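To find out which sessions are holding temp space before deciding which to close (a minimal sketch using standard views):
SQL> SELECT s.sid, s.username, u.tablespace, u.segtype, u.blocks FROM v$session s, v$tempseg_usage u WHERE s.saddr = u.session_addr;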
> What is Row Chaining and Row Migration?
Row Migration:
A row migrates when an update to that row would cause it to not fit on the block anymore (with all of the other data that exists there currently). A migration means that the entire row will move and we just leave behind the "forwarding address". So, the original block just has the rowid of the new block and the entire row is moved.
Row Chaining:
A row is too large to fit into a single database block. For example, if you use a 4KB block size for your database and you need to insert a row of 8KB into it, Oracle will use 3 blocks and store the row in pieces. Some conditions that will cause row chaining are: tables whose row size exceeds the block size; tables with LONG and LONG RAW columns, which are prone to having chained rows; and tables with more than 255 columns, which will have chained rows as Oracle breaks wide tables up into pieces. So, instead of just having a forwarding address on one block and the data on another, we have data on two or more blocks.
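To check how many chained or migrated rows a table has (a sketch; emp is an example table, and CHAIN_CNT is populated by ANALYZE, not by DBMS_STATS):
SQL> ANALYZE TABLE emp COMPUTE STATISTICS;
SQL> SELECT table_name, num_rows, chain_cnt FROM dba_tables WHERE table_name = 'EMP';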
> How to find out background processes?
SQL> select SID,PROGRAM from v$session where TYPE='BACKGROUND';
SQL> select name,description from V$bgprocess;
> What is a Consistent Backup?
A consistent backup is one in which the files being backed up contain all changes up to the same system change number (SCN).
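A consistent backup can only be taken after a clean shutdown, with the database mounted; a minimal RMAN sketch (illustrative, not a complete backup strategy):
RMAN> SHUTDOWN IMMEDIATE;
RMAN> STARTUP MOUNT;
RMAN> BACKUP DATABASE;
RMAN> ALTER DATABASE OPEN;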
> What is a fractured block?
Because the database continues writing to the file during an online backup, there is the possibility of backing up inconsistent data within a block. For example, assume that either RMAN or an operating system utility reads the block while the database writer is in the middle of updating the block. In this case, RMAN or the copy utility could read the old data in the top half of the block and the new data in the bottom half of the block. The block is a fractured block, meaning that the data in this block is not consistent.
> What are the steps to performing complete recovery on the whole database?
1. Mount the database and ensure that all datafiles you want to recover are online.
2. Restore a backup of the whole database or the files you want to recover.
3. Apply online or archived redo logs, or a combination of the two.
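The equivalent RMAN sketch (illustrative; assumes a usable backup and the required archived logs are available):
RMAN> STARTUP MOUNT;
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN;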
> What are the steps to performing complete recovery on a tablespace or datafile?
1. Take the tablespace or datafile to be recovered offline if the database is open.
2. Restore a backup of the datafiles you want to recover.
3. Apply online or archived redo logs, or a combination of the two.
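A corresponding RMAN sketch (the users tablespace is just an example):
RMAN> SQL 'ALTER TABLESPACE users OFFLINE IMMEDIATE';
RMAN> RESTORE TABLESPACE users;
RMAN> RECOVER TABLESPACE users;
RMAN> SQL 'ALTER TABLESPACE users ONLINE';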
> What are the components of the physical database structure of an Oracle database?
An Oracle database is comprised of three types of files: one or more datafiles, two or more redo log files, and one or more control files.
> What are the components of the logical database structure of an Oracle database?
There are tablespaces and the database’s schema objects.
> What is a tablespace?
A database is divided into logical storage units called tablespaces. A tablespace is used to group related logical structures together.
> What is the SYSTEM tablespace and when is it created?
Every Oracle database contains a tablespace named SYSTEM, which is automatically created when the database is created. The SYSTEM tablespace always contains the data dictionary tables for the entire database.
> Explain the relationship among database, tablespace and data file?
Each database is logically divided into one or more tablespaces, and one or more data files are explicitly created for each tablespace.
> What is a schema?
A schema is a collection of database objects belonging to a user.
> What are Schema Objects?
Schema objects are the logical structures that directly refer to the database’s data. Schema objects include tables, views, sequences, synonyms, indexes, clusters, database triggers, procedures, functions, packages and database links.
> Can objects of the same schema reside in different tablespaces?
Yes.
> What is an Oracle table?
A table is the basic unit of data storage in an Oracle database. The tables of a database hold all of the user-accessible data. Table data is stored in rows and columns.
> What is an Oracle view?
A view is a virtual table. Every view has a query attached to it. (The query is a SELECT statement that identifies the columns and rows of the table(s) the view uses.)
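A minimal example (table and column names are illustrative):
CREATE VIEW dept10_emp AS SELECT empno, ename, sal FROM emp WHERE deptno = 10;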
> What are the advantages of views?
– Provide an additional level of table security, by restricting access to a predetermined set of rows and columns of a table.
– Hide data complexity.
– Simplify commands for the user.
– Present the data in a different perspective from that of the base table.
– Store complex queries.
> What is an Oracle sequence?
A sequence generates a serial list of unique numbers for numerical columns of a database’s tables.
> What is a synonym?
A synonym is an alias for a table, view, sequence or program unit.
> What are the types of synonyms?
There are two types of synonyms: private and public.
Only its owner can access a private synonym.
Any database user can access a public synonym.
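For illustration (object names are placeholders):
CREATE SYNONYM my_emp FOR scott.emp;
CREATE PUBLIC SYNONYM emp FOR scott.emp;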
> What are synonyms used for?
– Mask the real name and owner of an object.
– Provide public access to an object.
– Provide location transparency for tables, views or program units of a remote database.
– Simplify the SQL statements for database users.