
Tuesday 25 November 2014

Guaranteed Restore Points

A fantastic feature called the guaranteed restore point came with Oracle 10g Release 2. With this feature you can set a restore point that remains until you drop it. The best part of this feature is that it uses the flash recovery area (db_recovery_file_dest) to keep the before images of changed blocks even if flashback logging is not enabled for your database. The only things you must do are set the db_recovery_file_dest parameter and create a guaranteed restore point (for the first restore point you must be in mount mode). After you create the guaranteed restore point, the before image of each changed block is kept in the flashback logs. The flashback logs of guaranteed restore points do not grow very large, because the before image of a changed block is kept only once, at the first change after the restore point is created.
To create the first guaranteed restore point while flashback is off, start the database in mount state after a consistent shutdown. Once the database is mounted, create your first guaranteed restore point as below.

SQL>CREATE RESTORE POINT before_upgrade GUARANTEE FLASHBACK DATABASE;

Subsequent guaranteed restore points can be created while the database is open. When you want to see the available guaranteed restore points, you can use the select below.

SQL> SELECT NAME, SCN, TIME, DATABASE_INCARNATION# DI,
            GUARANTEE_FLASHBACK_DATABASE, STORAGE_SIZE
     FROM V$RESTORE_POINT
     WHERE GUARANTEE_FLASHBACK_DATABASE = 'YES';

NAME             SCN      TIME                       DI GUA STORAGE_SIZE
---------------- -------- -------------------------- -- --- ------------
BEFORE_UPGRADE   1011771  07.08.2007 14:14:56,0000    5 YES            0
BEFORE_TRUNCATE  1013509  07.08.2007 14:20:42,0000    5 YES      8192000


When you want to flash the database back to a guaranteed restore point, run the command below from RMAN while the database is in mount mode.


RMAN> FLASHBACK DATABASE TO RESTORE POINT 'BEFORE_UPGRADE';
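
After the flashback completes, the database has to be opened with the RESETLOGS option before users can connect, for example:

RMAN> ALTER DATABASE OPEN RESETLOGS;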


You can drop the restore point if you don't need it anymore. When you drop a guaranteed restore point, the relevant flashback logs are deleted too.


SQL> drop restore point before_upgrade;
Restore point dropped.


What you have to consider is keeping enough space for the flashback logs of the restore point. No file in the flash recovery area is eligible for deletion if it is required to satisfy the guarantee. To monitor flash recovery area (db_recovery_file_dest) usage, you can use the query below.


SQL> SELECT * FROM V$RECOVERY_FILE_DEST;

NAME                                 SPACE_LIMIT SPACE_USED SPACE_RECLAIMABLE NUMBER_OF_FILES
------------------------------------ ----------- ---------- ----------------- ---------------
D:\oracle\oradata\flash_recover_area  1098907648  500303872                 0              12


If you lose flashback logs or db_recovery_file_dest fills up, you may face instance termination. For instance, I hit ORA-38760: This database instance failed to turn on flashback database in the middle of a 10.2.0.3 upgrade (I had to remove some of the flashback logs in the flash recovery area during the upgrade because of lack of space).
The database would not open because of the missing flashback logs. The word guarantee was working quite perfectly :) When I dropped the guaranteed restore point, the database logged the lines below in the alert log and at last let me open the database smoothly.


Deleted Oracle managed file D:\ORACLE\ORADATA\FLASH_RECOVER_AREA\DABAK\FLASHBACK\O1_MF_3C674SNO_.FLB





  

Thursday 20 November 2014

ORA-01555 snapshot too old

What is "snapshot too old" error and how to avoid it?

The first thing to understand here is how rollback segments work. They work in a circular fashion, looping around to reuse extents that have been released by committed transactions. In the case of a long-running query, if data is changed while the query is running, the rollback entries created for that change are needed to build a read-consistent image of the data for the long-running query. When those extents are reused while they are still needed by the query, this error occurs, because Oracle can no longer provide a read-consistent image of the data.
To avoid this error you need to postpone the reuse of extents. Increasing the size of the rollback segments and the value of the OPTIMAL parameter should do the trick. Committing less often is another solution: until a transaction is committed, its rollback segment entries cannot be reused, which reduces the chances of a snapshot too old error.

You'll probably need to dive into Oracle DB administration if you want to solve it via increasing the UNDO log. Basically you do (as SYSDBA):
 
SQL> show parameter undo

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
undo_management                      string      AUTO
undo_retention                       integer     900
undo_tablespace                      string      UNDOTS1
 

 
SQL> ALTER SYSTEM SET UNDO_RETENTION = 21600;
 
21600 is 6 hours in seconds.
However, Oracle will only keep 6 hours of old data if the UNDO log files are big enough, which depends on the size of the rollback segments and the amount of updates executed on the database.
So in addition to changing the undo retention time, you should also make sure that few concurrent updates are executed while your job is running. In particular, updates of the data your job is reading should be minimized.
If everything fails, increase the UNDO logs.
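
To judge whether the undo tablespace is big enough for the retention you ask for, you can look at the undo generation rate and, if necessary, grow the undo datafile. A minimal sketch as SYSDBA (the datafile path below is hypothetical):

SQL> SELECT BEGIN_TIME, END_TIME, UNDOBLKS
     FROM V$UNDOSTAT
     ORDER BY BEGIN_TIME;

SQL> ALTER DATABASE DATAFILE '/u01/oradata/DB1/undots1_01.dbf' RESIZE 4G;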

Friday 14 November 2014

Oracle Database Application Development


The following link describes the languages and datatypes included with Oracle that can be used in application development. It contains the following chapters:

Oracle Database Features

Oracle Database Architecture

Overview of Datatypes

Each column value and constant in a SQL statement has a datatype, which is associated with a specific storage format, constraints, and a valid range of values. When you create a table, you must specify a datatype for each of its columns.
Oracle provides the following built-in datatypes:
  • Character datatypes
  • Numeric datatypes
  • DATE datatype
  • LOB datatypes
  • RAW and LONG RAW datatypes
  • ROWID and UROWID datatypes
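As a minimal illustration, here is a hypothetical table definition that uses several of these built-in datatypes (all names are invented for the example):

CREATE TABLE purchase_orders (
  po_id       NUMBER(10),     -- numeric datatype
  customer    VARCHAR2(100),  -- character datatype
  ordered_on  DATE,           -- DATE datatype
  notes       CLOB            -- LOB datatype
);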
New object types can be created from any built-in database types or any previously created object types, object references, and collection types. Metadata for user-defined types is stored in a schema available to SQL, PL/SQL, Java, and other published interfaces.
An object type differs from native SQL datatypes in that it is user-defined, and it specifies both the underlying persistent data (attributes) and the related behaviors (methods). Object types are abstractions of the real-world entities, for example, purchase orders.
Object types and related object-oriented features, such as variable-length arrays and nested tables, provide higher-level ways to organize and access data in the database. Underneath the object layer, data is still stored in columns and tables, but you can work with the data in terms of the real-world entities--customers and purchase orders, for example--that make the data meaningful. Instead of thinking in terms of columns and tables when you query the database, you can simply select a customer.
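
For example, a purchase order could be modeled as a user-defined object type. The hypothetical sketch below shows only the type specification; the method implementation would go in a separate type body:

CREATE TYPE purchase_order_t AS OBJECT (
  po_id     NUMBER,                        -- attribute (persistent data)
  customer  VARCHAR2(100),                 -- attribute
  MEMBER FUNCTION total_amount RETURN NUMBER  -- method (behavior)
);
/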


Overview of Globalization

Oracle databases can be deployed anywhere in the world, and a single instance of an Oracle database can be accessed by users across the globe. Information is presented to each user in the language and format specific to his or her location.
The Globalization Development Kit (GDK) simplifies the development process and reduces the cost of developing internet applications for a multilingual market. GDK lets a single program work with text in any language from anywhere in the world.

Oracle Database Application Development

SQL and PL/SQL form the core of Oracle's application development stack. Not only do most enterprise back-ends run SQL, but Web applications accessing databases do so using SQL (wrappered by Java classes as JDBC), Enterprise Application Integration applications generate XML from SQL queries, and content-repositories are built on top of SQL tables. It is a simple, widely understood, unified data model. It is used standalone in many applications, but it is also invoked directly from Java (JDBC), Oracle Call Interface (OCI), Oracle C++ Call Interface (OCCI), or XSU (XML SQL Utility). Stored packages, procedures, and triggers can all be written in PL/SQL or in Java.


Overview of Oracle SQL

SQL (pronounced SEQUEL) is the programming language that defines and manipulates the database. SQL databases are relational databases, which means that data is stored in a set of simple relations.

SQL Statements

All operations on the information in an Oracle database are performed using SQL statements. A SQL statement is a string of SQL text. A statement must be the equivalent of a complete SQL sentence, as in:
SELECT last_name, department_id FROM employees; 

Only a complete SQL statement can run successfully. A sentence fragment, such as the following, generates an error indicating that more text is required:
SELECT last_name 

A SQL statement can be thought of as a very simple, but powerful, computer program or instruction. SQL statements are divided into the following categories:
Data Definition Language (DDL) Statements
These statements create, alter, maintain, and drop schema objects. DDL statements also include statements that permit a user to grant other users the privileges to access the database and specific objects within the database.
Data Manipulation Language (DML) Statements
These statements manipulate data. For example, querying, inserting, updating, and deleting rows of a table are all DML operations. The most common SQL statement is the SELECT statement, which retrieves data from the database. Locking a table or view and examining the execution plan of a SQL statement are also DML operations.
Transaction Control Statements
These statements manage the changes made by DML statements. They enable a user to group changes into logical transactions. Examples include COMMIT, ROLLBACK, and SAVEPOINT.
Session Control Statements
These statements let a user control the properties of the current session, including enabling and disabling roles and changing language settings. The two session control statements are ALTER SESSION and SET ROLE.
System Control Statements
These statements change the properties of the Oracle database instance. The only system control statement is ALTER SYSTEM. It lets users change settings, such as the minimum number of shared servers, kill a session, and perform other tasks.
Embedded SQL Statements
These statements incorporate DDL, DML, and transaction control statements in a procedural language program, such as those used with the Oracle precompilers. Examples include OPEN, CLOSE, FETCH, and EXECUTE.
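
The hypothetical statements below give one small example from several of these categories:

-- DDL: define a schema object
CREATE TABLE departments_copy (department_id NUMBER, department_name VARCHAR2(30));

-- DML: manipulate rows
INSERT INTO departments_copy VALUES (10, 'Administration');

-- Transaction control: make the change permanent
COMMIT;

-- Session control: change a property of the current session
ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY-MM-DD';

-- System control: change an instance-wide setting
ALTER SYSTEM SET SHARED_SERVERS = 5;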


Overview of PL/SQL

PL/SQL is Oracle's procedural language extension to SQL. PL/SQL combines the ease and flexibility of SQL with the procedural functionality of a structured programming language, such as IF ... THEN, WHILE, and LOOP.
When designing a database application, consider the following advantages of using stored PL/SQL:
  • PL/SQL code can be stored centrally in a database. Network traffic between applications and the database is reduced, so application and system performance increases. Even when PL/SQL is not stored in the database, applications can send blocks of PL/SQL to the database rather than individual SQL statements, thereby reducing network traffic.
  • Data access can be controlled by stored PL/SQL code. In this case, PL/SQL users can access data only as intended by application developers, unless another access route is granted.
  • PL/SQL blocks can be sent by an application to a database, running complex operations without excessive network traffic.
  • Oracle supports PL/SQL Server Pages, so your application logic can be invoked directly from your Web pages.
The following sections describe the PL/SQL program units that can be defined and stored centrally in a database.

PL/SQL Program Units

Program units are stored procedures, functions, packages, triggers, and autonomous transactions.
Procedures and functions are sets of SQL and PL/SQL statements grouped together as a unit to solve a specific problem or to perform a set of related tasks. They are created and stored in compiled form in the database and can be run by a user or a database application.
Procedures and functions are identical, except that functions always return a single value to the user. Procedures do not return values.
Packages encapsulate and store related procedures, functions, variables, and other constructs together as a unit in the database. They offer increased functionality (for example, global package variables can be declared and used by any procedure in the package). They also improve performance (for example, all objects of the package are parsed, compiled, and loaded into memory once).
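
A minimal hypothetical package sketch, showing a global package variable shared by a procedure and a function:

CREATE OR REPLACE PACKAGE order_api AS
  g_discount NUMBER := 0;                  -- global package variable
  PROCEDURE set_discount(p_value NUMBER);
  FUNCTION  get_discount RETURN NUMBER;    -- functions return a value
END order_api;
/
CREATE OR REPLACE PACKAGE BODY order_api AS
  PROCEDURE set_discount(p_value NUMBER) IS
  BEGIN
    g_discount := p_value;
  END;
  FUNCTION get_discount RETURN NUMBER IS
  BEGIN
    RETURN g_discount;
  END;
END order_api;
/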

Overview of Java

Java is an object-oriented programming language efficient for application-level programs. Oracle provides all types of JDBC drivers and enhances database access from Java applications. Java Stored Procedures are portable and secure in terms of access control, and allow non-Java and legacy applications to transparently invoke Java.

Overview of Application Programming Languages (APIs)

Oracle Database developers have a choice of languages for developing applications—C, C++, Java, COBOL, PL/SQL, and Visual Basic. The entire functionality of the database is available in all the languages. All language-specific standards are supported. Developers can choose the languages in which they are most proficient or one that is most suitable for a specific task. For example, an application might use Java on the server side to create dynamic Web pages, PL/SQL to implement stored procedures in the database, and C++ to implement computationally intensive logic in the middle tier.
The Oracle Call Interface (OCI) is a C data access API for Oracle Database. It supports the entire Oracle Database feature set. Many data access APIs, such as OCCI, ODBC, Oracle JDBC Type2 drivers, and so on, are built on top of OCI. OCI provides powerful functionality to build high performance, secure, scalable, and fault-tolerant applications. OCI is also used within the server for the data access needs of database kernel components, along with distributed database access. OCI lets an application developer use C function calls to access the Oracle data server and control all phases of business logic execution. OCI is exposed as a library of standard database access and retrieval functions in the form of a dynamic runtime library that can be linked in by the application.
The Oracle C++ Call Interface (OCCI) is a C++ API that lets you use the object-oriented features, native classes, and methods of the C++ programming language to access the Oracle database. The OCCI interface is modeled on the JDBC interface. OCCI is built on top of OCI and provides the power and performance of OCI using an object-oriented paradigm.
Open database connectivity (ODBC) is a database access API that lets you connect to a database and then prepare and run SQL statements against the database. In conjunction with an ODBC driver, an application can access any data source, including data stored in spreadsheets such as Excel.
Oracle offers a variety of data access methods from COM-based programming languages, such as Visual Basic and Active Server Pages. These include Oracle Objects for OLE (OO4O) and the Oracle Provider for OLE DB. Oracle also provides .NET data access support through the Oracle Data Provider for .NET. Oracle also supports OLE DB .NET and ODBC .NET.
Oracle also provides the Pro* series of precompilers, which allow you to embed SQL and PL/SQL in your C, C++, or COBOL applications.

Overview of Transactions

A transaction is a logical unit of work that comprises one or more SQL statements run by a single user. According to the ANSI/ISO SQL standard, with which Oracle is compatible, a transaction begins with the user's first executable SQL statement. A transaction ends when it is explicitly committed or rolled back by that user.

Transactions let users guarantee consistent changes to data, as long as the SQL statements within a transaction are grouped logically. A transaction should consist of all of the necessary parts for one logical unit of work—no more and no less. Data in all referenced tables are in a consistent state before the transaction begins and after it ends. Transactions should consist of only the SQL statements that make one consistent change to the data.
Consider a banking database. When a bank customer transfers money from a savings account to a checking account, the transaction can consist of three separate operations: decrease the savings account, increase the checking account, and record the transaction in the transaction journal.
The transfer of funds (the transaction) includes increasing one account (one SQL statement), decreasing another account (one SQL statement), and recording the transaction in the journal (one SQL statement). All actions should either fail or succeed together; the credit should not be committed without the debit. Other nonrelated actions, such as a new deposit to one account, should not be included in the transfer of funds transaction. Such statements should be in other transactions.
Oracle must guarantee that all three SQL statements are performed to maintain the accounts in proper balance. When something prevents one of the statements in the transaction from running (such as a hardware failure), then the other statements of the transaction must be undone. This is called rolling back. If an error occurs in making any of the updates, then no updates are made.
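
As a sketch, the transfer described above might look like this (the accounts and journal tables are hypothetical):

UPDATE accounts SET balance = balance - 100 WHERE account_id = 'SAVINGS-1';
UPDATE accounts SET balance = balance + 100 WHERE account_id = 'CHECKING-1';
INSERT INTO journal (from_account, to_account, amount)
VALUES ('SAVINGS-1', 'CHECKING-1', 100);
COMMIT;   -- all three changes become permanent together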

Commit and Undo Transactions

The changes made by the SQL statements that constitute a transaction can be either committed or rolled back. After a transaction is committed or rolled back, the next transaction begins with the next SQL statement.
Committing a transaction makes permanent the changes resulting from all DML statements in the transaction. The changes made by the SQL statements of a transaction become visible to any other user's statements whose execution starts after the transaction is committed.
Rolling back a transaction retracts any of the changes resulting from the SQL statements in the transaction. After a transaction is rolled back, the affected data is left unchanged, as if the SQL statements in the transaction were never run.

Savepoints

Savepoints divide a long transaction with many SQL statements into smaller parts. With savepoints, you can arbitrarily mark your work at any point within a long transaction. This gives you the option of later rolling back all work performed from the current point in the transaction to a declared savepoint within the transaction.
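
For example (hypothetical statements):

UPDATE employees SET salary = salary * 1.1 WHERE employee_id = 100;
SAVEPOINT after_raise;
DELETE FROM employees WHERE department_id = 20;
ROLLBACK TO SAVEPOINT after_raise;   -- undoes only the DELETE
COMMIT;                              -- the UPDATE is committed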







Overview of Important Features

Overview of High Availability Features

Computing environments configured to provide nearly full-time availability are known as high availability systems. Such systems typically have redundant hardware and software that makes the system available despite failures. Well-designed high availability systems avoid having single points-of-failure.
When failures occur, the failover process moves processing performed by the failed component to the backup component. This process remasters systemwide resources, recovers partial or failed transactions, and restores the system to normal, preferably within a matter of microseconds. The more transparent that failover is to users, the higher the availability of the system.
Oracle has a number of products and features that provide high availability in cases of unplanned downtime or planned downtime. These include Fast-Start Fault Recovery, Real Application Clusters, Recovery Manager (RMAN), backup and recovery solutions, Oracle Flashback, partitioning, Oracle Data Guard, LogMiner, multiplexed redo log files, and online reorganization. These can be used in various combinations to meet specific high availability needs.


Overview of Business Intelligence Features

This section describes several business intelligence features.

Data Warehousing

A data warehouse is a relational database designed for query and analysis rather than for transaction processing. It usually contains historical data derived from transaction data, but it can include data from other sources. It separates analysis workload from transaction workload and enables an organization to consolidate data from several sources.
In addition to a relational database, a data warehouse environment includes an extraction, transportation, transformation, and loading (ETL) solution, an online analytical processing (OLAP) engine, client analysis tools, and other applications that manage the process of gathering data and delivering it to business users.

Extraction, Transformation, and Loading (ETL)

You must load your data warehouse regularly so that it can serve its purpose of facilitating business analysis. To do this, data from one or more operational systems must be extracted and copied into the warehouse. The process of extracting data from source systems and bringing it into the data warehouse is commonly called ETL, which stands for extraction, transformation, and loading.

Materialized Views

A materialized view provides access to table data by storing the results of a query in a separate schema object. Unlike an ordinary view, which does not take up any storage space or contain any data, a materialized view contains the rows resulting from a query against one or more base tables or views. A materialized view can be stored in the same database as its base tables or in a different database.
Materialized views stored in the same database as their base tables can improve query performance through query rewrite. Query rewrite is a mechanism by which Oracle transparently improves query response time, by automatically rewriting a SQL query to use the materialized view instead of accessing the original tables. Query rewrite is particularly useful in a data warehouse environment.
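
A minimal hypothetical sketch of a materialized view eligible for query rewrite:

CREATE MATERIALIZED VIEW sales_by_region
  BUILD IMMEDIATE
  REFRESH COMPLETE ON DEMAND
  ENABLE QUERY REWRITE
AS
  SELECT region_id, SUM(amount_sold) AS total_sold
  FROM   sales
  GROUP  BY region_id;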

Bitmap Indexes in Data Warehousing

Data warehousing environments typically have large amounts of data and ad hoc queries, but a low level of concurrent data manipulation language (DML) transactions. For such applications, bitmap indexing provides:
  • Reduced response time for large classes of ad hoc queries
  • Reduced storage requirements compared to other indexing techniques
  • Dramatic performance gains even on hardware with a relatively small number of CPUs or a small amount of memory
  • Efficient maintenance during parallel DML and loads
Fully indexing a large table with a traditional B-tree index can be prohibitively expensive in terms of space because the indexes can be several times larger than the data in the table. Bitmap indexes are typically only a fraction of the size of the indexed data in the table.
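
For example, a bitmap index on a hypothetical low-cardinality column:

CREATE BITMAP INDEX sales_region_bix ON sales (region_id);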

Table Compression

To reduce disk use and memory use (specifically, the buffer cache), you can store tables and partitioned tables in a compressed format inside the database. This often leads to a better scaleup for read-only operations. Table compression can also speed up query execution. There is, however, a slight cost in CPU overhead.
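
A minimal sketch (table names hypothetical); creating a table with CREATE TABLE ... AS SELECT loads it through the direct path, so the rows are stored compressed:

CREATE TABLE sales_history COMPRESS
AS SELECT * FROM sales;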

Parallel Execution

When Oracle runs SQL statements in parallel, multiple processes work together simultaneously to run a single SQL statement. By dividing the work necessary to run a statement among multiple processes, Oracle can run the statement more quickly than if only a single process ran it. This is called parallel execution or parallel processing.
Parallel execution dramatically reduces response time for data-intensive operations on large databases, because statement processing can be split up among many CPUs on a single Oracle system.
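
For example, parallelism can be requested for a single statement with a hint, or enabled at the table level (names hypothetical):

SELECT /*+ PARALLEL(s, 8) */ COUNT(*) FROM sales s;

ALTER TABLE sales PARALLEL 8;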

Analytic SQL

Oracle has many SQL operations for performing analytic operations in the database. These include ranking, moving averages, cumulative sums, ratio-to-reports, and period-over-period comparisons.
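
A small hypothetical example combining a ranking and a moving average:

SELECT region_id,
       sale_month,
       amount_sold,
       RANK() OVER (PARTITION BY region_id ORDER BY amount_sold DESC) AS sales_rank,
       AVG(amount_sold) OVER (PARTITION BY region_id
                              ORDER BY sale_month
                              ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) AS moving_avg
FROM   monthly_sales;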

OLAP Capabilities

Application developers can use SQL online analytical processing (OLAP) functions for standard and ad-hoc reporting. For additional analytic functionality, Oracle OLAP provides multidimensional calculations, forecasting, modeling, and what-if scenarios. This enables developers to build sophisticated analytic and planning applications such as sales and marketing analysis, enterprise budgeting and financial analysis, and demand planning systems. Data can be stored in either relational tables or multidimensional objects.
Oracle OLAP provides the query performance and calculation capability previously found only in multidimensional databases to Oracle's relational platform. In addition, it provides a Java OLAP API that is appropriate for the development of internet-ready analytical applications. Unlike other combinations of OLAP and RDBMS technology, Oracle OLAP is not a multidimensional database using bridges to move data from the relational data store to a multidimensional data store. Instead, it is truly an OLAP-enabled relational database. As a result, Oracle provides the benefits of a multidimensional database along with the scalability, accessibility, security, manageability, and high availability of the Oracle database. The Java OLAP API, which is specifically designed for internet-based analytical applications, offers productive data access.

Data Mining

With Oracle Data Mining, data never leaves the database — the data, data preparation, model building, and model scoring results all remain in the database. This enables Oracle to provide an infrastructure for application developers to integrate data mining seamlessly with database applications. Typical examples of applications in which data mining is used include call centers, ATMs, ERM, and business planning applications. Data mining functions such as model building, testing, and scoring are provided through a Java API.

Partitioning

Partitioning addresses key issues in supporting very large tables and indexes by letting you decompose them into smaller and more manageable pieces called partitions. SQL queries and DML statements do not need to be modified in order to access partitioned tables. However, after partitions are defined, DDL statements can access and manipulate individual partitions rather than entire tables or indexes. This is how partitioning can simplify the manageability of large database objects. Also, partitioning is entirely transparent to applications.
Partitioning is useful for many different types of applications, particularly applications that manage large volumes of data. OLTP systems often benefit from improvements in manageability and availability, while data warehousing systems benefit from performance and manageability.
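
A minimal hypothetical range-partitioned table, and a DDL statement that targets a single partition:

CREATE TABLE sales (
  sale_id    NUMBER,
  sale_date  DATE,
  amount     NUMBER
)
PARTITION BY RANGE (sale_date) (
  PARTITION sales_q1 VALUES LESS THAN (TO_DATE('2014-04-01', 'YYYY-MM-DD')),
  PARTITION sales_q2 VALUES LESS THAN (TO_DATE('2014-07-01', 'YYYY-MM-DD'))
);

ALTER TABLE sales TRUNCATE PARTITION sales_q1;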

Overview of Content Management Features

Oracle includes datatypes to handle all the types of rich Internet content such as relational data, object-relational data, XML, text, audio, video, image, and spatial. These datatypes appear as native types in the database. They can all be queried using SQL. A single SQL statement can include data belonging to any or all of these datatypes.

XML in Oracle

XML, eXtensible Markup Language, is the standard way to identify and describe data on the Web. Oracle XML DB treats XML as a native datatype in the database. Oracle XML DB offers a number of easy ways to create XML documents from relational tables. The result of any SQL query can be automatically converted into an XML document. Oracle also includes a set of utilities, available in Java and C++, to simplify the task of creating XML documents.
Oracle includes five XML developer's kits, or XDKs. Each consists of a standards-based set of components, tools, and utilities. The XDKs are available for Java, C, C++, PL/SQL, and Java Beans.
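
For example, SQL/XML functions can turn an ordinary query result into XML (using the familiar employees sample table):

SELECT XMLELEMENT("employee",
         XMLFOREST(e.last_name AS "name", e.salary AS "salary"))
FROM   employees e;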

LOBs

The LOB datatypes BLOB, CLOB, NCLOB, and BFILE enable you to store and manipulate large blocks of unstructured data (such as text, graphic images, video clips, and sound waveforms) in binary or character format. They provide efficient, random, piece-wise access to the data.

Oracle Text

Oracle Text indexes any document or textual content to add fast, accurate retrieval of information. Oracle Text allows text searches to be combined with regular database searches in a single SQL statement. The ability to find documents based on their textual content, metadata, or attributes, makes the Oracle Database the single point of integration for all data management.
The Oracle Text SQL API makes it simple and intuitive for application developers and DBAs to create and maintain Text indexes and run Text searches.
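
A hypothetical sketch: create a Text index on a document column, then combine a text search with a regular predicate in a single SQL statement:

CREATE INDEX docs_body_idx ON docs (body)
  INDEXTYPE IS CTXSYS.CONTEXT;

SELECT doc_id
FROM   docs
WHERE  CONTAINS(body, 'flashback') > 0
AND    created_on > SYSDATE - 30;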

Oracle Ultra Search

Oracle Ultra Search lets you index and search Web sites, database tables, files, mailing lists, Oracle Application Server Portals, and user-defined data sources. As such, you can use Oracle Ultra Search to build different kinds of search applications.

Oracle interMedia

Oracle interMedia provides an array of services to develop and deploy traditional, Web, and wireless applications that include image, audio, and video in an integrated fashion. Multimedia content can be stored and managed directly in Oracle, or Oracle can store and index metadata together with external references that enable efficient access to media content stored outside the database.

Oracle Spatial

Oracle includes built-in spatial features that let you store, index, and manage location content (assets, buildings, roads, land parcels, sales regions, and so on) and query location relationships using the power of the database. The Oracle Spatial Option adds advanced spatial features such as linear reference support and coordinate systems.


Overview of Security Features

Oracle includes security features that control how a database is accessed and used. For example, security mechanisms:
  • Prevent unauthorized database access
  • Prevent unauthorized access to schema objects
  • Audit user actions
Associated with each database user is a schema by the same name. By default, each database user creates and has access to all objects in the corresponding schema.
Database security can be classified into two categories: system security and data security.
System security includes the mechanisms that control the access and use of the database at the system level. For example, system security includes:
  • Valid user name/password combinations
  • The amount of disk space available to a user's schema objects
  • The resource limits for a user
System security mechanisms check whether a user is authorized to connect to the database, whether database auditing is active, and which system operations a user can perform.
Data security includes the mechanisms that control the access and use of the database at the schema object level. For example, data security includes:
  • Which users have access to a specific schema object and the specific types of actions allowed for each user on the schema object (for example, user SCOTT can issue SELECT and INSERT statements but not DELETE statements using the employees table)
  • The actions, if any, that are audited for each schema object
  • Data encryption to prevent unauthorized users from bypassing Oracle and accessing data
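
The SCOTT example above would be set up with ordinary object privileges, for instance:

GRANT SELECT, INSERT ON employees TO scott;
-- DELETE is simply not granted, so SCOTT cannot delete rows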

Security Mechanisms

The Oracle database provides discretionary access control, which is a means of restricting access to information based on privileges. The appropriate privilege must be assigned to a user in order for that user to access a schema object. Appropriately privileged users can grant other users privileges at their discretion.
Oracle manages database security using several different facilities:
  • Authentication to validate the identity of the entities using your networks, databases, and applications
  • Authorization processes to limit access and actions, with limits linked to users' identities and roles
  • Access restrictions on objects, such as tables or rows
  • Security policies
  • Database auditing


Overview of Data Integrity and Triggers

Data must adhere to certain business rules, as determined by the database administrator or application developer. For example, assume that a business rule says that no row in the inventory table can contain a numeric value greater than nine in the sale_discount column. If an INSERT or UPDATE statement attempts to violate this integrity rule, then Oracle must undo the invalid statement and return an error to the application. Oracle provides integrity constraints and database triggers to manage data integrity rules.

Integrity Constraints

An integrity constraint is a declarative way to define a business rule for a column of a table. An integrity constraint is a statement about table data that is always true and that follows these rules:
  • If an integrity constraint is created for a table and some existing table data does not satisfy the constraint, then the constraint cannot be enforced.
  • After a constraint is defined, if any of the results of a DML statement violate the integrity constraint, then the statement is rolled back, and an error is returned.
Integrity constraints are defined with a table and are stored as part of the table's definition in the data dictionary, so that all database applications adhere to the same set of rules. When a rule changes, it only needs to be changed once at the database level and not many times for each application.
The following integrity constraints are supported by Oracle:
  • NOT NULL: Disallows nulls (empty entries) in a table's column.
  • UNIQUE KEY: Disallows duplicate values in a column or set of columns.
  • PRIMARY KEY: Disallows duplicate values and nulls in a column or set of columns.
  • FOREIGN KEY: Requires each value in a column or set of columns to match a value in a related table's UNIQUE or PRIMARY KEY. FOREIGN KEY integrity constraints also define referential integrity actions that dictate what Oracle should do with dependent data if the data it references is altered.
  • CHECK: Disallows values that do not satisfy the logical expression of the constraint.
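
The sale_discount business rule from the earlier example could be declared like this (the products table referenced by the foreign key is hypothetical):

CREATE TABLE inventory (
  item_id       NUMBER  CONSTRAINT inv_pk PRIMARY KEY,
  product_id    NUMBER  NOT NULL
                CONSTRAINT inv_prod_fk REFERENCES products (product_id),
  sale_discount NUMBER  CONSTRAINT inv_disc_ck CHECK (sale_discount <= 9)
);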

Keys

The term key is used in the definitions of several types of integrity constraints. A key is the column or set of columns included in the definition of certain types of integrity constraints. Keys describe the relationships between the different tables and columns of a relational database. Individual values in a key are called key values.
The different types of keys include:
  • Primary key: The column or set of columns included in the definition of a table's PRIMARY KEY constraint. A primary key's values uniquely identify the rows in a table. Only one primary key can be defined for each table.
  • Unique key: The column or set of columns included in the definition of a UNIQUE constraint.
  • Foreign key: The column or set of columns included in the definition of a referential integrity constraint.
  • Referenced key: The unique key or primary key of the same or a different table referenced by a foreign key.


Triggers

Triggers are procedures written in PL/SQL, Java, or C that run (fire) implicitly whenever a table or view is modified or when some user actions or database system actions occur.
Triggers supplement the standard capabilities of Oracle to provide a highly customized database management system. For example, a trigger can restrict DML operations against a table to those issued during regular business hours.
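
The business-hours restriction mentioned above could be sketched as follows (the table name is hypothetical):

CREATE OR REPLACE TRIGGER inventory_hours_trg
  BEFORE INSERT OR UPDATE OR DELETE ON inventory
BEGIN
  IF TO_NUMBER(TO_CHAR(SYSDATE, 'HH24')) NOT BETWEEN 8 AND 17 THEN
    RAISE_APPLICATION_ERROR(-20001, 'DML allowed only during business hours');
  END IF;
END;
/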

     
     

Overview of Information Integration Features

A distributed environment is a network of disparate systems that seamlessly communicate with each other. Each system in the distributed environment is called a node. The system to which a user is directly connected is called the local system. Any additional systems accessed by this user are called remote systems. A distributed environment allows applications to access and exchange data from the local and remote systems. All the data can be simultaneously accessed and modified.

Distributed SQL

A homogeneous distributed database system is a network of two or more Oracle databases that reside on one or more computers. Distributed SQL enables applications and users to simultaneously access or modify the data in several databases as easily as they access or modify a single database.
An Oracle distributed database system can be transparent to users, making it appear as though it is a single Oracle database. Companies can use this distributed SQL feature to make all of their Oracle databases look like one and thus reduce some of the complexity of the distributed system.
Oracle uses database links to enable users on one database to access objects in a remote database. A local user can access a link to a remote database without having to be a user on the remote database.
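
A minimal sketch of a database link and a query through it (the link name, credentials, and service name are hypothetical):

CREATE DATABASE LINK sales_link
  CONNECT TO remote_user IDENTIFIED BY remote_pwd
  USING 'sales_db';

SELECT * FROM orders@sales_link;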
Location Transparency
Location transparency occurs when the physical location of data is transparent to the applications and users. For example, a view that joins table data from several databases provides location transparency because the user of the view does not need to know from where the data originates.
SQL and Transaction Transparency
Oracle provides query, update, and transaction transparency. For example, standard SQL statements like SELECT, INSERT, UPDATE, and DELETE work just as they do in a non-distributed database environment. Additionally, applications control transactions using the standard SQL statements COMMIT, SAVEPOINT, and ROLLBACK. Oracle ensures the integrity of data in a distributed transaction using the two-phase commit mechanism.
Distributed Query Optimization
Distributed query optimization reduces the amount of data transfer required between sites when a transaction retrieves data from remote tables referenced in a distributed SQL statement.

Oracle Streams

Oracle Streams enables the propagation and management of data, transactions, and events in a data stream, either within a database or from one database to another. The stream routes published information to subscribed destinations. As users' needs change, they can simply implement a new capability of Oracle Streams, without sacrificing existing capabilities.
Oracle Streams provides a set of elements that lets users control what information is put into a stream, how the stream flows or is routed from node to node, what happens to events in the stream as they flow into each node, and how the stream terminates. By specifying the configuration of the elements acting on the stream, a user can address specific requirements, such as message queuing or data replication.
Capture
Oracle Streams implicitly and explicitly captures events and places them in the staging area. Database events, such as DML and DDL, are implicitly captured by mining the redo log files. Sophisticated subscription rules can determine what events should be captured.
Staging
The staging area is a queue that provides a service to store and manage captured events. Changes to database tables are formatted as logical change records (LCRs) and stored in a staging area until subscribers consume them. LCR staging provides a holding area with security, as well as auditing and tracking of LCR data.
Consumption
Messages in a staging area are consumed by the apply engine, where changes are applied to a database or consumed by an application. A flexible apply engine allows use of a standard or custom apply function. Support for explicit dequeue lets application developers use Oracle Streams to reliably exchange messages. They can also notify applications of changes to data, while still leveraging the change capture and propagation features of Oracle Streams.
Message Queuing
Oracle Streams Advanced Queuing is built on top of the flexible Oracle Streams infrastructure. It provides a unified framework for processing events. Events generated in applications, in workflow, or implicitly captured from redo logs or database triggers can be captured in a queue. These events can be consumed in a variety of ways. They can be automatically applied with a user-defined function or database table operation, can be explicitly dequeued, or a notification can be sent to the consuming application. These events can be transformed at any stage. If the consuming application is on a different database, then the events are automatically propagated to the appropriate database. Operations on these events can be automatically audited, and the history can be retained for the user-specified duration.
Data Replication
Replication is the maintenance of database objects in two or more databases. Oracle Streams provides powerful replication features that can be used to keep multiple copies of distributed objects synchronized.
Oracle Streams automatically determines what information is relevant and shares that information with those who need it. This active sharing of information includes capturing and managing events in the database, including data changes with DML, and propagating those events to other databases and applications. Data changes can be applied directly to the replica database, or can call a user-defined procedure to perform alternative work at the destination database, for example, to populate a staging table used to load a data warehouse.
Oracle Streams is an open information sharing solution, supporting heterogeneous replication between Oracle and non-Oracle systems. Using a transparent gateway, DML changes initiated at Oracle databases can be applied to non-Oracle platforms.
Oracle Streams is fully inter-operational with materialized views, or snapshots, which can maintain updatable or read-only, point-in-time copies of data. They can contain a full copy of a table or a defined subset of the rows in the master table that satisfy a value-based selection criterion. There can be multitier materialized views as well, where one materialized view is a subset of another materialized view. Materialized views are periodically updated, or refreshed, from their associated master tables through transactionally consistent batch updates.

Oracle Transparent Gateways and Generic Connectivity

Oracle Transparent Gateways and Generic Connectivity extend Oracle distributed features to non-Oracle systems. Oracle can work with non-Oracle data sources, non-Oracle message queuing systems, and non-SQL applications, ensuring interoperability with other vendors' products and technologies.
They translate third-party SQL dialects, data dictionaries, and datatypes into Oracle formats, thus making the non-Oracle data store appear as a remote Oracle database. These technologies enable companies to seamlessly integrate the different systems and provide a consolidated view of the company as a whole.
Oracle Transparent Gateways and Generic Connectivity can be used for synchronous access, using distributed SQL, and for asynchronous access, using Oracle Streams. Introducing a Transparent Gateway into an Oracle Streams environment enables replication of data from an Oracle database to a non-Oracle database.
Generic Connectivity is a generic solution, while Oracle Transparent Gateways are tailored solutions, specifically coded for the non-Oracle system.



Overview of Database Backup and Recovery Features

In every database system, the possibility of a system or hardware failure always exists. If a failure occurs and affects the database, then the database must be recovered. The goals after a failure are to ensure that the effects of all committed transactions are reflected in the recovered database and to return to normal operation as quickly as possible while insulating users from problems caused by the failure.
Oracle provides various mechanisms for the following:
  • Database recovery required by different types of failures
  • Flexible recovery operations to suit any situation
  • Availability of data during backup and recovery operations so users of the system can continue to work
     

Types of Failures

Several circumstances can halt the operation of an Oracle database. The most common types of failure are described below.

User error: Requires a database to be recovered to a point in time before the error occurred. For example, a user could accidentally drop a table. To enable recovery from user errors and accommodate other unique recovery requirements, Oracle provides exact point-in-time recovery, so that the database can be recovered to the instant before the table was dropped.

Statement failure: Occurs when there is a logical failure in the handling of a statement in an Oracle program. When statement failure occurs, any effects of the statement are automatically undone by Oracle and control is returned to the user.

Process failure: Results from a failure in a user process accessing Oracle, such as an abnormal disconnection or process termination. The background process PMON automatically detects the failed user process, rolls back the uncommitted transaction of the user process, and releases any resources that the process was using.

Instance failure: Occurs when a problem arises that prevents an instance from continuing work. Instance failure can result from a hardware problem such as a power outage, or a software problem such as an operating system failure. When an instance failure occurs, the data in the buffers of the system global area is not written to the datafiles. After an instance failure, Oracle automatically performs instance recovery. If one instance in a RAC environment fails, then another instance recovers the redo for the failed instance. In a single-instance database, or in a RAC database in which all instances fail, Oracle automatically applies all redo when you restart the database.

Media (disk) failure: An error can occur when trying to write or read a file on disk that is required to operate the database. A common example is a disk head failure, which causes the loss of all files on a disk drive. Different files can be affected by this type of disk failure, including the datafiles, the redo log files, and the control files. Also, because the database instance cannot continue to function properly, the data in the database buffers of the system global area cannot be permanently written to the datafiles.
A disk failure requires you to restore lost files and then perform media recovery. Unlike instance recovery, media recovery must be initiated by the user. Media recovery updates restored datafiles so the information in them corresponds to the most recent time point before the disk failure, including the committed data in memory that was lost because of the failure.


Oracle provides for complete media recovery from all possible types of hardware failures, including disk failures. Options are provided so that a database can be completely recovered or partially recovered to a specific point in time.
If some datafiles are damaged in a disk failure but most of the database is intact and operational, the database can remain open while the required tablespaces are individually recovered. Therefore, undamaged portions of a database are available for normal use while damaged portions are being recovered.

Structures Used for Recovery

Oracle uses several structures to provide complete recovery from an instance or disk failure: the redo log, undo records, a control file, and database backups.
The Redo Log
The redo log is a set of files that protect altered database data in memory that has not been written to the datafiles. The redo log can consist of the online redo log and the archived redo log.

The online redo log is a set of two or more online redo log files that record all changes made to the database, including uncommitted and committed changes. Redo entries are temporarily stored in redo log buffers of the system global area, and the background process LGWR writes the redo entries sequentially to an online redo log file. LGWR writes redo entries continually, and it also writes a commit record every time a user process commits a transaction.
Optionally, filled online redo files can be manually or automatically archived before being reused, creating archived redo logs. To enable or disable archiving, set the database in one of the following modes:
  • ARCHIVELOG: The filled online redo log files are archived before they are reused in the cycle.
  • NOARCHIVELOG: The filled online redo log files are not archived.
In ARCHIVELOG mode, the database can be completely recovered from both instance and disk failure. The database can also be backed up while it is open and available for use. However, additional administrative operations are required to maintain the archived redo log.
If the database redo log operates in NOARCHIVELOG mode, then the database can be completely recovered from instance failure, but not from disk failure. Also, the database can be backed up only while it is completely closed. Because no archived redo log is created, no extra work is required by the database administrator.
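
Archiving is switched on while the database is mounted; a minimal sketch:

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;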
Undo Records
Undo records are stored in undo tablespaces. Oracle uses the undo data for a variety of purposes, including accessing before-images of blocks changed in uncommitted transactions. During database recovery, Oracle applies all changes recorded in the redo log and then uses undo information to roll back any uncommitted transactions.
Control Files
The control files include information about the file structure of the database and the current log sequence number being written by LGWR. During normal recovery procedures, the information in a control file guides the automatic progression of the recovery operation.
Database Backups
Because one or more files can be physically damaged as the result of a disk failure, media recovery requires the restoration of the damaged files from the most recent operating system backup of a database. You can either back up the database files with Recovery Manager (RMAN), or use operating system utilities. RMAN is an Oracle utility that manages backup and recovery operations, creates backups of database files (datafiles, control files, and archived redo log files), and restores or recovers a database from backups.
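
For example, a whole-database backup including the archived logs can be taken with a couple of RMAN commands:

RMAN> CONNECT TARGET /
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;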


Oracle Database Features

Overview of Scalability and Performance Features


Oracle includes several software mechanisms to fulfill the following important requirements of an information management system:
  • Data concurrency of a multiuser system must be maximized.
  • Data must be read and modified in a consistent fashion. The data a user is viewing or changing is not changed (by other users) until the user is finished with the data.
  • High performance is required for maximum productivity from the many users of the database system.

Concurrency

A primary concern of a multiuser database management system is how to control concurrency, which is the simultaneous access of the same data by many users. Without adequate concurrency controls, data could be updated or changed improperly, compromising data integrity.
One way to manage data concurrency is to make each user wait for a turn. The goal of a database management system is to reduce that wait so it is either nonexistent or negligible to each user. All data manipulation language statements should proceed with as little interference as possible, and destructive interactions between concurrent transactions must be prevented. Destructive interaction is any interaction that incorrectly updates data or incorrectly alters underlying data structures. Neither performance nor data integrity can be sacrificed.
Oracle resolves such issues by using various types of locks and a multiversion consistency model. These features are based on the concept of a transaction. It is the application designer's responsibility to ensure that transactions fully exploit these concurrency and consistency features.

Read Consistency

Read consistency, as supported by Oracle, does the following:
  • Guarantees that the set of data seen by a statement is consistent with respect to a single point in time and does not change during statement execution (statement-level read consistency)
  • Ensures that readers of database data do not wait for writers or other readers of the same data
  • Ensures that writers of database data do not wait for readers of the same data
  • Ensures that writers only wait for other writers if they attempt to update identical rows in concurrent transactions
The simplest way to think of Oracle's implementation of read consistency is to imagine each user operating a private copy of the database, hence the multiversion consistency model.
Read Consistency, Undo Records, and Transactions
To manage the multiversion consistency model, Oracle must create a read-consistent set of data when a table is queried (read) and simultaneously updated (written). When an update occurs, the original data values changed by the update are recorded in the database undo records. As long as this update remains part of an uncommitted transaction, any user that later queries the modified data views the original data values. Oracle uses current information in the system global area and information in the undo records to construct a read-consistent view of a table's data for a query.
Only when a transaction is committed are the changes of the transaction made permanent. Statements that start after the user's transaction is committed only see the changes made by the committed transaction.
The transaction is key to Oracle's strategy for providing read consistency. This unit of committed (or uncommitted) SQL statements:
  • Dictates the start point for read-consistent views generated on behalf of readers
  • Controls when modified data can be seen by other transactions of the database for reading or updating
Read-Only Transactions
By default, Oracle guarantees statement-level read consistency: the set of data returned by a single query is consistent with respect to a single point in time. In some situations, however, you might also require transaction-level read consistency, that is, the ability to run multiple queries within a single transaction, all of them read-consistent with respect to the same point in time, so that the queries in the transaction do not see the effects of intervening committed transactions. If you want to run a number of queries against multiple tables and you are not doing any updating, a read-only transaction is preferable.
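As a minimal sketch (the emp table and the queries are only illustrative), a read-only transaction is opened with SET TRANSACTION READ ONLY, which must be the first statement of the transaction; every query that follows sees the database as of that moment, until the transaction ends:

SQL> SET TRANSACTION READ ONLY;
SQL> SELECT COUNT(*) FROM emp;
SQL> SELECT SUM(sal) FROM emp;
SQL> COMMIT;

The COMMIT simply ends the transaction; both queries return answers consistent with the same point in time, even if other sessions committed changes to emp in between.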

Locking Mechanisms

Oracle also uses locks to control concurrent access to data. When updating information, the data server holds that information with a lock until the update is submitted or committed. Until that happens, no one else can make changes to the locked information. This ensures the data integrity of the system.
Oracle provides unique non-escalating row-level locking. Unlike other data servers that "escalate" locks to cover entire groups of rows or even the entire table, Oracle always locks only the row of information being updated. Because Oracle includes the locking information with the actual rows themselves, Oracle can lock an unlimited number of rows so users can work concurrently without unnecessary delays.
Automatic Locking
Oracle locking is performed automatically and requires no user action. Implicit locking occurs for SQL statements as necessary, depending on the action requested. Oracle's lock manager automatically locks table data at the row level. By locking table data at the row level, contention for the same data is minimized.
Oracle's lock manager maintains several different types of row locks, depending on what type of operation established the lock. The two general types of locks are exclusive locks and share locks. Only one exclusive lock can be placed on a resource (such as a row or a table); however, many share locks can be placed on a single resource. Both exclusive and share locks always allow queries on the locked resource but prohibit other activity on the resource (such as updates and deletes).
Manual Locking
Under some circumstances, a user might want to override default locking. Oracle allows manual override of automatic locking features at both the row level (by first querying for the rows that will be updated in a subsequent statement) and the table level.
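A hedged sketch (emp and its columns are hypothetical): rows can be locked ahead of an update with SELECT ... FOR UPDATE, and a whole table with LOCK TABLE:

SQL> SELECT empno, sal FROM emp WHERE deptno = 10 FOR UPDATE;
SQL> LOCK TABLE emp IN EXCLUSIVE MODE;

Both locks are released when the transaction commits or rolls back.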

Quiesce Database

Database administrators occasionally need isolation from concurrent non-DBA actions, that is, from other users' transactions, queries, or PL/SQL statements. One way to provide such isolation is to shut down the database and reopen it in restricted mode. Alternatively, you can put the system into a quiesced state without disrupting users. In the quiesced state, the database administrator can safely perform actions whose execution requires isolation from concurrent non-DBA users, as sketched below.
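A minimal sketch, assuming the Database Resource Manager has been active since instance startup (a prerequisite for quiescing):

SQL> ALTER SYSTEM QUIESCE RESTRICTED;
SQL> -- perform the maintenance that requires isolation
SQL> ALTER SYSTEM UNQUIESCE;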


Real Application Clusters

Real Application Clusters (RAC) comprises several Oracle instances running on multiple clustered computers that communicate with each other over an interconnect. RAC uses cluster software to access a shared database that resides on shared disk. RAC combines the processing power of these multiple interconnected computers to provide system redundancy, near-linear scalability, and high availability. RAC also offers significant advantages for both OLTP and data warehouse systems, and all systems and applications can efficiently exploit clustered environments.
You can scale applications in RAC environments to meet increasing data processing demands without changing the application code. As you add resources such as nodes or storage, RAC extends the processing powers of these resources beyond the limits of the individual components.

Portability

Oracle provides unique portability across all major platforms and ensures that your applications run without modification after changing platforms. This is because the Oracle code base is identical across platforms, so you have identical feature functionality across all platforms, for complete application transparency. Because of this portability, you can easily upgrade to a more powerful server as your requirements change.

Overview of Manageability Features

People who administer the operation of an Oracle database system, known as database administrators (DBAs), are responsible for creating Oracle databases, ensuring their smooth operation, and monitoring their use. In addition to its many alerts and advisors, Oracle offers the following features:

Self-Managing Database

Oracle Database provides a high degree of self-management, automating routine DBA tasks and reducing the complexity of space, memory, and resource administration. Oracle self-managing database features include the following: automatic undo management, dynamic memory management, Oracle-managed files, mean time to recover, free space management, multiple block sizes, and Recovery Manager (RMAN).

Oracle Enterprise Manager

Enterprise Manager is a system management tool that provides an integrated solution for centrally managing your heterogeneous environment. Combining a graphical console, Oracle Management Servers, Oracle Intelligent Agents, common services, and administrative tools, Enterprise Manager provides a comprehensive systems management platform for managing Oracle products.
From the client interface, the Enterprise Manager Console, you can perform the following tasks:
  • Administer the complete Oracle environment, including databases, iAS servers, applications, and services
  • Diagnose, modify, and tune multiple databases
  • Schedule tasks on multiple systems at varying time intervals
  • Monitor database conditions throughout the network
  • Administer multiple network nodes and services from many locations
  • Share tasks with other administrators
  • Group related targets together to facilitate administration tasks
  • Launch integrated Oracle and third-party tools
  • Customize the display of an Enterprise Manager administrator

SQL*Plus

SQL*Plus is a tool for entering and running ad-hoc database statements. It lets you run SQL statements and PL/SQL blocks, and perform many additional tasks as well.
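For example (the connect string and query are only illustrative):

$ sqlplus scott@orcl
SQL> SELECT table_name FROM user_tables;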

Automatic Storage Management

Automatic Storage Management automates and simplifies the layout of datafiles, control files, and log files. Database files are automatically distributed across all available disks, and database storage is rebalanced whenever the storage configuration changes. It also provides redundancy through the mirroring of database files and improves performance by spreading I/O across all available disks.
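A hedged sketch, run from the ASM instance (the disk paths and names are assumptions for illustration):

SQL> CREATE DISKGROUP data NORMAL REDUNDANCY
     FAILGROUP controller1 DISK '/devices/diska1'
     FAILGROUP controller2 DISK '/devices/diskb1';

A database instance can then place files in the group simply by naming it:

SQL> CREATE TABLESPACE users2 DATAFILE '+data' SIZE 100M;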

The Scheduler

To help simplify management tasks and to provide a rich set of functionality for complex scheduling needs, Oracle provides a collection of functions and procedures in the DBMS_SCHEDULER package. Collectively, these functions are called the Scheduler, and they are callable from any PL/SQL program.
The Scheduler lets database administrators and application developers control when and where various tasks take place in the database environment. For example, database administrators can schedule and monitor database maintenance jobs such as backups or nightly data warehousing loads and extracts.
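As a minimal sketch (the job name, schedule, and called procedure are hypothetical), a nightly job is created with DBMS_SCHEDULER.CREATE_JOB:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'nightly_load_job',            -- hypothetical name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN load_warehouse; END;',  -- hypothetical procedure
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY; BYHOUR=2',        -- every day at 02:00
    enabled         => TRUE);
END;
/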

Database Resource Manager

Traditionally, the operating system regulated resource management among the various applications running on a system, including Oracle databases. The Database Resource Manager instead controls the distribution of resources among various sessions by controlling the execution schedule inside the database. By controlling which sessions run and for how long, the Database Resource Manager can ensure that resource distribution matches the plan directive and, hence, the business objectives.
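A hedged sketch of building a simple plan (the plan and group names and the CPU split are assumptions); note that a directive for the built-in OTHER_GROUPS group is required before the plan validates:

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    plan => 'daytime_plan', comment => 'favor OLTP during the day');
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
    consumer_group => 'oltp_group', comment => 'interactive sessions');
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan => 'daytime_plan', group_or_subplan => 'oltp_group',
    comment => 'most of the CPU', cpu_p1 => 80);
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan => 'daytime_plan', group_or_subplan => 'OTHER_GROUPS',
    comment => 'everything else', cpu_p1 => 20);
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

Sessions still have to be mapped to oltp_group, and the plan activated by setting the RESOURCE_MANAGER_PLAN initialization parameter.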

Overview of Oracle Utilities

Oracle's database utilities let you perform the following tasks:
  • Move data and metadata at high speed from one database to another, using Data Pump Export and Import
  • Extract and manipulate complete representations of the metadata for database objects, using the Metadata API
  • Move all or part of the data and metadata for a site from one database to another, using the Data Pump API
  • Load data into Oracle tables from operating system files, using SQL*Loader, or from external sources, using external tables
  • Query redo log files through a SQL interface, using LogMiner
  • Perform physical data structure integrity checks on an offline (for example, backup) database or datafile, using DBVERIFY
  • Change the internal database identifier (DBID) and the database name (DBNAME) for an operational database, using the DBNEWID utility


    Overview of Data Pump Export and Import

    Oracle Data Pump technology enables very high-speed movement of data and metadata from one database to another. This technology is the basis for Oracle's data movement utilities, Data Pump Export and Data Pump Import.
    Data Pump enables you to specify whether a job should move a subset of the data and metadata. This is done using data filters and metadata filters, which are implemented through Export and Import parameters.

    Data Pump Export

    Data Pump Export (hereinafter referred to as Export for ease of reading) is a utility for unloading data and metadata into a set of operating system files called a dump file set. The dump file set can be moved to another system and loaded by the Data Pump Import utility.
    The dump file set is made up of one or more disk files that contain table data, database object metadata, and control information. The files are written in a proprietary, binary format, which can be read only by Data Pump Import. During an import operation, the Data Pump Import utility uses these files to locate each database object in the dump file set.
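    A hedged sketch (the directory object, path, schema, and password are assumptions); the directory object must exist in the database before Export can write to it:

    SQL> CREATE DIRECTORY dpump_dir AS '/u01/app/dumps';
    $ expdp scott/tiger DIRECTORY=dpump_dir DUMPFILE=scott.dmp LOGFILE=scott.log SCHEMAS=scott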

    Data Pump Import

    Data Pump Import (hereinafter referred to as Import for ease of reading) is a utility for loading an export dump file set into a target system. The dump file set is made up of one or more disk files that contain table data, database object metadata, and control information. The files are written in a proprietary, binary format.
    Import can also be used to load a target database directly from a source database with no intervening files, which allows export and import operations to run concurrently, minimizing total elapsed time. This is known as network import.
    Import also enables you to see all of the SQL DDL that the Import job will be executing, without actually executing the SQL. This is implemented through the Import SQLFILE parameter.
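    Two hedged sketches (the file names and the database link are assumptions): the first writes the DDL in the dump file to a script without executing it; the second pulls a schema directly from a source database over the link source_db, with no intervening dump file:

    $ impdp scott/tiger DIRECTORY=dpump_dir DUMPFILE=scott.dmp SQLFILE=scott_ddl.sql
    $ impdp scott/tiger NETWORK_LINK=source_db SCHEMAS=scott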

    Overview of the Data Pump API

    The Data Pump API provides a high-speed mechanism to move all or part of the data and metadata for a site from one database to another. To use the Data Pump API, you use the procedures provided in the DBMS_DATAPUMP PL/SQL package. The Data Pump Export and Data Pump Import utilities are based on the Data Pump API.
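    A minimal sketch of a schema export through the API (the file and directory names are assumptions):

    DECLARE
      h     NUMBER;
      state VARCHAR2(30);
    BEGIN
      -- create a schema-mode export job and get a handle to it
      h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');
      DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'scott_api.dmp',
                             directory => 'DPUMP_DIR');
      -- restrict the job to the SCOTT schema
      DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'SCHEMA_EXPR',
                                    value => 'IN (''SCOTT'')');
      DBMS_DATAPUMP.START_JOB(h);
      DBMS_DATAPUMP.WAIT_FOR_JOB(h, state);   -- blocks until the job completes
    END;
    /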


    Overview of the Metadata API

    The Metadata application programming interface (API) provides a means for you to do the following:
    • Retrieve an object's metadata as XML
    • Transform the XML in a variety of ways, including transforming it into SQL DDL
    • Submit the XML to re-create the object extracted by the retrieval
    To use the Metadata API, you use the procedures provided in the DBMS_METADATA PL/SQL package. For the purposes of the Metadata API, every entity in the database is modeled as an object that belongs to an object type. For example, the table scott.emp is an object and its object type is TABLE. When you fetch an object's metadata you must specify the object type.
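    For example, to fetch the DDL for the scott.emp table mentioned above (SET LONG simply keeps SQL*Plus from truncating the returned CLOB):

    SQL> SET LONG 20000
    SQL> SELECT DBMS_METADATA.GET_DDL('TABLE', 'EMP', 'SCOTT') FROM DUAL;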

    Overview of SQL*Loader

    SQL*Loader loads data from external files into tables of an Oracle database. It has a powerful data parsing engine that puts little limitation on the format of the data in the datafile. You can use SQL*Loader to do the following:
    • Load data from multiple datafiles during the same load session.
    • Load data into multiple tables during the same load session.
    • Specify the character set of the data.
    • Selectively load data (you can load records based on the records' values).
    • Manipulate the data before loading it, using SQL functions.
    • Generate unique sequential key values in specified columns.
    • Use the operating system's file system to access the datafiles.
    • Load data from disk, tape, or named pipe.
    • Generate sophisticated error reports, which greatly aid troubleshooting.
    • Load arbitrarily complex object-relational data.
    • Use secondary datafiles for loading LOBs and collections.
    • Use either conventional or direct path loading. While conventional path loading is very flexible, direct path loading provides superior loading performance.
    A typical SQL*Loader session takes as input a control file, which controls the behavior of SQL*Loader, and one or more datafiles. The output of SQL*Loader is an Oracle database (where the data is loaded), a log file, a bad file, and potentially, a discard file.
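    A hedged sketch (the table, columns, and file names are assumptions). First, a control file, emp.ctl, describing a comma-separated datafile:

    LOAD DATA
    INFILE 'emp.dat'
    APPEND
    INTO TABLE emp
    FIELDS TERMINATED BY ','
    (empno, ename, sal)

    And the invocation:

    $ sqlldr scott/tiger CONTROL=emp.ctl LOG=emp.log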


    Overview of External Tables

    The external tables feature is a complement to existing SQL*Loader functionality. It lets you access data in external sources as if it were in a table in the database. External tables can be written to using the ORACLE_DATAPUMP access driver. Neither data manipulation language (DML) operations nor index creation are allowed on an external table. Therefore, SQL*Loader may be the better choice in data loading situations that require additional indexing of the staging table.
    To use the external tables feature, you must have some knowledge of the file format and record format of the datafiles on your platform. You must also know enough about SQL to be able to create an external table and perform queries against it.
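    A hedged sketch (the directory object, file, and columns are assumptions) of an external table served by the ORACLE_LOADER access driver; once created, it can be queried like any other table:

    CREATE TABLE emp_ext (
      empno NUMBER(4),
      ename VARCHAR2(10),
      sal   NUMBER(7,2)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY ext_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
      )
      LOCATION ('emp.dat')
    );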

    Overview of LogMiner

    Oracle LogMiner enables you to query redo log files through a SQL interface. All changes made to user data or to the database dictionary are recorded in the Oracle redo log files. Therefore, redo log files contain all the necessary information to perform recovery operations.
    LogMiner functionality is available through a command-line interface or through the Oracle LogMiner Viewer graphical user interface (GUI). The LogMiner Viewer is a part of Oracle Enterprise Manager.
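    A minimal sketch (the log file path and table name are assumptions), using the online catalog as the dictionary source:

    SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/u01/oradata/db1/redo01.log', OPTIONS => DBMS_LOGMNR.NEW);
    SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
    SQL> SELECT username, operation, sql_redo, sql_undo FROM v$logmnr_contents WHERE seg_name = 'EMP';
    SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR;
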
    The following are some of the potential uses for data contained in redo log files:
    • Pinpointing when a logical corruption to a database, such as errors made at the application level, may have begun. This enables you to restore the database to the state it was in just before corruption.
    • Detecting and whenever possible, correcting user error, which is a more likely scenario than logical corruption. User errors include deleting the wrong rows because of incorrect values in a WHERE clause, updating rows with incorrect values, dropping the wrong index, and so forth.
    • Determining what actions you would have to take to perform fine-grained recovery at the transaction level. If you fully understand and take into account existing dependencies, it may be possible to perform a table-based undo operation to roll back a set of changes.
    • Performance tuning and capacity planning through trend analysis. You can determine which tables get the most updates and inserts. That information provides a historical perspective on disk access statistics, which can be used for tuning purposes.
    • Performing post-auditing. The redo log files contain all the information necessary to track any DML and DDL statements run on the database, the order in which they were run, and who executed them.

    Overview of DBVERIFY Utility

    DBVERIFY is an external command-line utility that performs a physical data structure integrity check. It can be used on offline or online databases, as well as on backup files. You use DBVERIFY primarily when you need to ensure that a backup database (or datafile) is valid before it is restored, or as a diagnostic aid when you have encountered data corruption problems.
    Because DBVERIFY can be run against an offline database, integrity checks are significantly faster.
    DBVERIFY checks are limited to cache-managed blocks (that is, data blocks). Because DBVERIFY is only for use with datafiles, it will not work against control files or redo logs.
    There are two command-line interfaces to DBVERIFY. With the first interface, you specify disk blocks of a single datafile for checking. With the second interface, you specify a segment for checking.
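    Two hedged sketches, one per interface (the file path, block size, and segment identifier are assumptions; SEGMENT_ID takes the form tsn.segfile.segblock):

    $ dbv FILE=/u01/oradata/db1/users01.dbf BLOCKSIZE=8192
    $ dbv USERID=scott/tiger SEGMENT_ID=4.35.1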


    Overview of DBNEWID Utility

    DBNEWID is a database utility that can change the internal, unique database identifier (DBID) and the database name (DBNAME) for an operational database. The DBNEWID utility lets you change any of the following:
    • Only the DBID of a database
    • Only the DBNAME of a database
    • Both the DBNAME and DBID of a database
    Therefore, you can manually create a copy of a database and give it a new DBNAME and DBID by re-creating the control file, and you can register a seed database and a manually copied database together in the same RMAN repository.
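    A hedged sketch (the new name is an assumption): the database must be shut down cleanly and then mounted before running the utility, and after a DBID change it must be opened with RESETLOGS:

    SQL> SHUTDOWN IMMEDIATE
    SQL> STARTUP MOUNT
    $ nid TARGET=SYS DBNAME=newdb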



Overview of Accessing the Database

This section describes Oracle Net Services, as well as how to start up the database.

Network Connections

Oracle Net Services is Oracle's mechanism for interfacing with the communication protocols used by the networks that facilitate distributed processing and distributed databases.
Communication protocols define the way that data is transmitted and received on a network. Oracle Net Services supports communications on all major network protocols, including TCP/IP, HTTP, FTP, and WebDAV.
Using Oracle Net Services, application developers do not need to be concerned with supporting network communications in a database application. If a new protocol is used, then the database administrator makes some minor changes, while the application requires no modifications and continues to function.
Oracle Net, a component of Oracle Net Services, enables a network session from a client application to an Oracle database server. Once a network session is established, Oracle Net acts as the data courier for both the client application and the database server. It establishes and maintains the connection between the client application and database server, as well as exchanges messages between them. Oracle Net can perform these jobs because it is located on each computer in the network.
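For example, a client typically resolves a connect identifier through a tnsnames.ora entry like the following sketch (the host, port, and service name are assumptions):

orcl =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl))
  )

A connection is then as simple as: sqlplus scott@orcl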

Starting Up the Database

The three steps to starting an Oracle database and making it available for systemwide use are:
  1. Start an instance.
  2. Mount the database.
  3. Open the database.
A database administrator can perform these steps using the SQL*Plus STARTUP statement or Enterprise Manager. When Oracle starts an instance, it reads the server parameter file (SPFILE) or initialization parameter file to determine the values of initialization parameters. Then, it allocates an SGA and creates background processes.
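For example, from SQL*Plus the three steps can be taken one at a time (a sketch; the administrator must connect with SYSDBA privileges):

SQL> CONNECT / AS SYSDBA
SQL> STARTUP NOMOUNT
SQL> ALTER DATABASE MOUNT;
SQL> ALTER DATABASE OPEN;

A plain STARTUP performs all three steps at once.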

How Oracle Works

The following example describes the most basic level of operations that Oracle performs. This illustrates an Oracle configuration where the user and associated server process are on separate computers (connected through a network).
  1. An instance has started on the computer running Oracle (often called the host or database server).
  2. A computer running an application (a local computer or client workstation) runs the application in a user process. The client application attempts to establish a connection to the server using the proper Oracle Net Services driver.
  3. The server is running the proper Oracle Net Services driver. The server detects the connection request from the application and creates a dedicated server process on behalf of the user process.
  4. The user runs a SQL statement and commits the transaction. For example, the user changes a name in a row of a table.
  5. The server process receives the statement and checks the shared pool for any shared SQL area that contains a similar SQL statement. If a shared SQL area is found, then the server process checks the user's access privileges to the requested data, and the previously existing shared SQL area is used to process the statement. If not, then a new shared SQL area is allocated for the statement, so it can be parsed and processed.
  6. The server process retrieves any necessary data values from the actual datafile (table) or those stored in the SGA.
  7. The server process modifies data in the system global area. The DBWn process writes modified blocks permanently to disk when doing so is efficient. Because the transaction is committed, the LGWR process immediately records the transaction in the redo log file.
  8. If the transaction is successful, then the server process sends a message across the network to the application. If it is not successful, then an error message is transmitted.
  9. Throughout this entire procedure, the other background processes run, watching for conditions that require intervention. In addition, the database server manages other users' transactions and prevents contention between transactions that request the same data.