11 Oracle Utilities

This chapter describes Oracle database utilities for data transfer, data maintenance, and database administration.
This chapter contains the following topics:
  • Introduction to Oracle Utilities
  • Overview of Data Pump Export and Import
  • Overview of the Data Pump API
  • Overview of the Metadata API
  • Overview of SQL*Loader
  • Overview of External Tables
  • Overview of LogMiner
  • Overview of DBVERIFY Utility
  • Overview of DBNEWID Utility

11.1 Introduction to Oracle Utilities

Oracle's database utilities let you perform the following tasks:
  • Move data and metadata from one database to another at high speed, using Data Pump Export and Import
  • Extract and manipulate complete representations of the metadata for database objects, using the Metadata API
  • Move all or part of the data and metadata for a site from one database to another, using the Data Pump API
  • Load data into Oracle tables from operating system files using SQL*Loader, or from external sources using external tables
  • Query redo log files through a SQL interface with LogMiner
  • Perform physical data structure integrity checks on an offline (for example, backup) database or datafile with DBVERIFY
  • Maintain the internal database identifier (DBID) and the database name (DBNAME) for an operational database, using the DBNEWID utility

See Also:

Oracle Database Utilities

11.2 Overview of Data Pump Export and Import

Oracle Data Pump technology enables very high-speed movement of data and metadata from one database to another. This technology is the basis for Oracle's data movement utilities, Data Pump Export and Data Pump Import.
Data Pump enables you to specify whether a job should move a subset of the data and metadata. This is done using data filters and metadata filters, which are implemented through Export and Import parameters.

11.2.1 Data Pump Export

Data Pump Export (hereinafter referred to as Export for ease of reading) is a utility for unloading data and metadata into a set of operating system files called a dump file set. The dump file set can be moved to another system and loaded by the Data Pump Import utility.
The dump file set is made up of one or more disk files that contain table data, database object metadata, and control information. The files are written in a proprietary, binary format, which can be read only by Data Pump Import. During an import operation, the Data Pump Import utility uses these files to locate each database object in the dump file set.
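As a minimal command-line sketch (the schema, password, and directory object are assumptions, not part of the original text), a schema-mode export that writes such a dump file set might be invoked like this:

```shell
# Hypothetical example: export the HR schema into a dump file set.
# DATA_PUMP_DIR must name an existing directory object in the database.
expdp hr/hr_password \
  SCHEMAS=hr \
  DIRECTORY=DATA_PUMP_DIR \
  DUMPFILE=hr_export%U.dmp \
  LOGFILE=hr_export.log
```

The %U substitution lets Export create additional files in the set as needed.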

11.2.2 Data Pump Import

Data Pump Import (hereinafter referred to as Import for ease of reading) is a utility for loading an export dump file set into a target system. The dump file set is made up of one or more disk files that contain table data, database object metadata, and control information. The files are written in a proprietary, binary format.
Import can also be used to load a target database directly from a source database with no intervening files, which allows export and import operations to run concurrently, minimizing total elapsed time. This is known as network import.
Import also enables you to see all of the SQL DDL that the Import job will be executing, without actually executing the SQL. This is implemented through the Import SQLFILE parameter.
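Hedged sketches of both features (the file names and database link are assumptions): the SQLFILE parameter writes the DDL to a script without running it, and NETWORK_LINK performs a network import with no dump files.

```shell
# Write the DDL the import job would run into a script, without executing it.
impdp hr/hr_password \
  DIRECTORY=DATA_PUMP_DIR \
  DUMPFILE=hr_export%U.dmp \
  SQLFILE=hr_import_preview.sql

# Network import: load the target directly over a database link,
# with no intervening dump files.
impdp hr/hr_password \
  NETWORK_LINK=source_db_link \
  SCHEMAS=hr
```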

11.3 Overview of the Data Pump API

The Data Pump API provides a high-speed mechanism to move all or part of the data and metadata for a site from one database to another. To use the Data Pump API, you use the procedures provided in the DBMS_DATAPUMP PL/SQL package. The Data Pump Export and Data Pump Import utilities are based on the Data Pump API.
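A minimal PL/SQL sketch of the API (the schema name, file name, and directory object are assumptions): open a schema-mode export job, attach a dump file, filter down to one schema, then start the job and wait for it to finish.

```sql
DECLARE
  h     NUMBER;         -- job handle returned by OPEN
  state VARCHAR2(30);   -- final job state
BEGIN
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');
  DBMS_DATAPUMP.ADD_FILE(h, filename  => 'hr_api.dmp',
                            directory => 'DATA_PUMP_DIR');
  DBMS_DATAPUMP.METADATA_FILTER(h, name  => 'SCHEMA_EXPR',
                                   value => 'IN (''HR'')');
  DBMS_DATAPUMP.START_JOB(h);
  DBMS_DATAPUMP.WAIT_FOR_JOB(h, state);
END;
/
```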


11.4 Overview of the Metadata API

The Metadata application programming interface (API) provides a means for you to do the following:
  • Retrieve an object's metadata as XML
  • Transform the XML in a variety of ways, including transforming it into SQL DDL
  • Submit the XML to re-create the object extracted by the retrieval
To use the Metadata API, you use the procedures provided in the DBMS_METADATA PL/SQL package. For the purposes of the Metadata API, every entity in the database is modeled as an object that belongs to an object type. For example, the table scott.emp is an object and its object type is TABLE. When you fetch an object's metadata you must specify the object type.
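For example, fetching the metadata of scott.emp with the object type specified (a sketch; GET_DDL applies the built-in DDL transform, while GET_XML would return the raw XML document):

```sql
-- In SQL*Plus, allow long output, then fetch the DDL for SCOTT.EMP.
SET LONG 100000
SELECT DBMS_METADATA.GET_DDL('TABLE', 'EMP', 'SCOTT') FROM DUAL;
```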


11.5 Overview of SQL*Loader

SQL*Loader loads data from external files into tables of an Oracle database. It has a powerful data parsing engine that puts little limitation on the format of the data in the datafile. You can use SQL*Loader to do the following:
  • Load data from multiple datafiles during the same load session.
  • Load data into multiple tables during the same load session.
  • Specify the character set of the data.
  • Selectively load data (you can load records based on the records' values).
  • Manipulate the data before loading it, using SQL functions.
  • Generate unique sequential key values in specified columns.
  • Use the operating system's file system to access the datafiles.
  • Load data from disk, tape, or named pipe.
  • Generate sophisticated error reports, which greatly aid troubleshooting.
  • Load arbitrarily complex object-relational data.
  • Use secondary datafiles for loading LOBs and collections.
  • Use either conventional or direct path loading. While conventional path loading is very flexible, direct path loading provides superior loading performance.
A typical SQL*Loader session takes as input a control file, which controls the behavior of SQL*Loader, and one or more datafiles. The output of SQL*Loader is an Oracle database (where the data is loaded), a log file, a bad file, and potentially, a discard file.
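A sketch of such a control file (the table, columns, and file names are assumptions) wiring together the datafile, bad file, and discard file described above:

```
-- emp.ctl: hypothetical SQL*Loader control file
LOAD DATA
INFILE 'emp.dat'
BADFILE 'emp.bad'
DISCARDFILE 'emp.dsc'
INTO TABLE emp
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(empno, ename, hiredate DATE 'YYYY-MM-DD', sal)
```

A session using it might be started with, for example, `sqlldr scott/password CONTROL=emp.ctl LOG=emp.log`.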

11.6 Overview of External Tables

The external tables feature is a complement to existing SQL*Loader functionality. It lets you access data in external sources as if it were in a table in the database. External tables can be written to using the ORACLE_DATAPUMP access driver. Neither data manipulation language (DML) operations nor index creation are allowed on an external table. Therefore, SQL*Loader may be the better choice in data loading situations that require additional indexing of the staging table.
To use the external tables feature, you must have some knowledge of the file format and record format of the datafiles on your platform. You must also know enough about SQL to be able to create an external table and perform queries against it.
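A sketch (the directory object, file name, and columns are assumptions) of an external table over a comma-separated file, queried like an ordinary table:

```sql
CREATE TABLE emp_ext (
  empno NUMBER(4),
  ename VARCHAR2(10)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER        -- reads flat files; ORACLE_DATAPUMP can also write
  DEFAULT DIRECTORY ext_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('emp.dat')
);

SELECT * FROM emp_ext;      -- queries only: no DML, no indexes
```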

See Also:

"External Tables"



11.7 Overview of LogMiner

Oracle LogMiner enables you to query redo log files through a SQL interface. All changes made to user data or to the database dictionary are recorded in the Oracle redo log files. Therefore, redo log files contain all the necessary information to perform recovery operations.
LogMiner functionality is available through a command-line interface or through the Oracle LogMiner Viewer graphical user interface (GUI). The LogMiner Viewer is a part of Oracle Enterprise Manager.
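A SQL*Plus sketch of the command-line interface (the archived log file name is an assumption; DICT_FROM_ONLINE_CATALOG requires the source database to be open): register a log file, start the miner, query the changes, and end the session.

```sql
EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
  LOGFILENAME => '/oracle/arch/arch_1_100.arc', -
  OPTIONS     => DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.START_LOGMNR( -
  OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

-- The mined changes are exposed through a view:
SELECT username, sql_redo FROM V$LOGMNR_CONTENTS;

EXECUTE DBMS_LOGMNR.END_LOGMNR;
```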
The following are some of the potential uses for data contained in redo log files:
  • Pinpointing when a logical corruption to a database, such as errors made at the application level, may have begun. This enables you to restore the database to the state it was in just before corruption.
  • Detecting, and whenever possible correcting, user error, which is a more likely scenario than logical corruption. User errors include deleting the wrong rows because of incorrect values in a WHERE clause, updating rows with incorrect values, dropping the wrong index, and so forth.
  • Determining what actions you would have to take to perform fine-grained recovery at the transaction level. If you fully understand and take into account existing dependencies, it may be possible to perform a table-based undo operation to roll back a set of changes.
  • Performance tuning and capacity planning through trend analysis. You can determine which tables get the most updates and inserts. That information provides a historical perspective on disk access statistics, which can be used for tuning purposes.
  • Performing post-auditing. The redo log files contain all the information necessary to track any DML and DDL statements run on the database, the order in which they were run, and who executed them.

11.8 Overview of DBVERIFY Utility

DBVERIFY is an external command-line utility that performs a physical data structure integrity check. It can be used on offline or online databases, as well as on backup files. You use DBVERIFY primarily when you need to ensure that a backup database (or datafile) is valid before it is restored, or as a diagnostic aid when you have encountered data corruption problems.
Because DBVERIFY can be run against an offline database, integrity checks are significantly faster.
DBVERIFY checks are limited to cache-managed blocks (that is, data blocks). Because DBVERIFY is only for use with datafiles, it will not work against control files or redo logs.
There are two command-line interfaces to DBVERIFY. With the first interface, you specify disk blocks of a single datafile for checking. With the second interface, you specify a segment for checking.
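Sketches of both interfaces (the paths, credentials, and segment id are assumptions):

```shell
# First interface: check the blocks of a single datafile.
# BLOCKSIZE must match the block size of the file being checked.
dbv FILE=/oracle/oradata/db1/users01.dbf BLOCKSIZE=8192 LOGFILE=dbv_users.log

# Second interface: check one segment, identified as
# tablespace_id.header_file.header_block.
dbv USERID=system/manager SEGMENT_ID=4.35.1
```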

11.9 Overview of DBNEWID Utility

DBNEWID is a database utility that can change the internal, unique database identifier (DBID) and the database name (DBNAME) for an operational database. The DBNEWID utility lets you change any of the following:
  • Only the DBID of a database
  • Only the DBNAME of a database
  • Both the DBNAME and DBID of a database
Therefore, you can manually create a copy of a database and give it a new DBNAME and DBID by re-creating the control file, and you can register a seed database and a manually copied database together in the same RMAN repository.
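Hedged command-line sketches of the three cases (credentials and the new name are assumptions; the database must be mounted, not open, when nid runs):

```shell
# Change both the DBNAME and the DBID:
nid TARGET=sys/sys_password DBNAME=newdb

# Change only the DBNAME, keeping the DBID
# (e.g. for a copy that should share an RMAN repository with its seed):
nid TARGET=sys/sys_password DBNAME=newdb SETNAME=YES

# Change only the DBID (no DBNAME parameter given):
nid TARGET=sys/sys_password
```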

Translator's notes (terms whose translation was uncertain were marked with a yellow background):

1. In Oracle 10g, DML operations and index creation are allowed on external tables; see also Oracle Utilities.
[027] Neither data manipulation language (DML) operations nor index creation are allowed on an external table.

[041] Therefore, you can manually create a copy of a database and give it a new DBNAME and DBID by re-creating the control file, and you can register a seed database and a manually copied database together in the same RMAN repository.

translator: zw1840@hotmail.com