(Updates marked with a date in parentheses were added after the manuscript was submitted for publication)
Introduction
In this article, we will take a deep dive into a set of objects that a DBA commonly uses. Much of the information presented here is either missing from the documentation or inadequately described when it comes to these commonly used objects. As a result, there is some confusion and misunderstanding in the Oracle community. In this article, we hope to give DBAs and developers an opportunity to revisit their understanding of these objects and to use this information during root cause analysis of issues.
The DBA_OBJECTS view
Almost every object in an Oracle database has an entry in the DBA_OBJECTS view. We query this view when we want to find out details about an object, such as whether a given object is a table or an index, when it was created, when a DDL statement last modified it, and so on. This article examines this view and a few related ones, focusing on characteristics that may surprise us. Please note that "object" in this article is not related to "object-oriented" databases or programming, but rather refers to database objects such as tables or views.
The columns OWNER and OBJECT_NAME of DBA_OBJECTS are self-explanatory. Although OBJECT_NAME is documented to be a VARCHAR2 column of length 30 characters (even in the Oracle Database 11g documentation), since at least Oracle8i this column has actually been VARCHAR2(128)! (Update 2016-02: The 12c documentation has corrected it.) However, only certain types of objects, DB links for example, are allowed to have names longer than 30 characters. The SUBOBJECT_NAME column is mainly for listing table or index partitions and subpartitions.
The OBJECT_ID and DATA_OBJECT_ID columns hold two different types of information. The OBJECT_ID column is a general data dictionary identifier assigned to every object in the database (a notable exception is the DB link). The DATA_OBJECT_ID column, however, whose underlying column DATAOBJ# in the OBJ$ table is the "data layer object number", can be considered the ID of a segment, because only a segment can actually hold data. If DATA_OBJECT_ID is null, the object is not associated with a physical segment of its own, such as a view, sequence, procedure, etc. But some object types are less obvious: a domain index whose data exists in its secondary objects; a partitioned table or index whose data is in the individual (sub)partitions; a materialized view whose data is in the associated table, in spite of the word "materialized"; a global temporary table whose data exists in a temporary segment, the GTT itself just serving as a dictionary object to facilitate its usage. And in 11gR2, an unusable index (or index partition) and an empty table with segment creation deferred also fall into this category.
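As a quick check, a query like the following (the schema name SCOTT is only an example) contrasts the two IDs; rows with a null DATA_OBJECT_ID are objects that own no physical segment:

SELECT object_name, object_type, object_id, data_object_id
FROM dba_objects
WHERE owner = 'SCOTT'
ORDER BY object_type, object_name;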
Normally, OBJECT_ID and DATA_OBJECT_ID, if the latter is not null, are equal to each other. However, they can differ if the object is a table within a cluster, or after certain operations on the object such as noted below:
An ALTER TABLE MOVE command, even without the TABLESPACE clause or when specifying the same tablespace, physically moves the table, as can be seen from DBA_EXTENTS.FILE_ID and BLOCK_ID. Since the locations of the rows have changed, the ROWID values stored in the table's indexes become invalid, which requires that those indexes be marked UNUSABLE, unless the table or partition was empty before the move. ALTER TABLE MOVE LOB also changes the LOB segment's DATA_OBJECT_ID. TRUNCATE TABLE, on the other hand, changes DATA_OBJECT_ID without physically moving the table.
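The following throwaway test (the table and index names are made up for illustration) shows the effect of a move on the two IDs and on the index status:

CREATE TABLE move_demo (id NUMBER);
CREATE INDEX move_demo_ix ON move_demo (id);
INSERT INTO move_demo VALUES (1);
COMMIT;
SELECT object_name, object_id, data_object_id FROM user_objects
WHERE object_name IN ('MOVE_DEMO', 'MOVE_DEMO_IX');
ALTER TABLE move_demo MOVE;
-- The table's DATA_OBJECT_ID has changed while its OBJECT_ID has not,
-- and the index is now UNUSABLE:
SELECT object_name, object_id, data_object_id FROM user_objects
WHERE object_name IN ('MOVE_DEMO', 'MOVE_DEMO_IX');
SELECT index_name, status FROM user_indexes WHERE table_name = 'MOVE_DEMO';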
In these cases, what value will be assigned to the OBJECT_ID or DATA_OBJECT_ID column? To keep this simple, assume that no object has been dropped since the object in question was last created. Except in the case of a partition-table exchange, when a new DATA_OBJECT_ID is assigned, it is set to
SELECT MAX(DATA_OBJECT_ID)+1 FROM DBA_OBJECTS;
If you just create a new table, it will be assigned an OBJECT_ID and DATA_OBJECT_ID of
SELECT GREATEST(MAX(OBJECT_ID), MAX(DATA_OBJECT_ID)) FROM DBA_OBJECTS;
If the table has one index, TRUNCATE will increment DATA_OBJECT_ID by 2 because its index takes the intervening number. If the table does not have an index, TRUNCATE should increment DATA_OBJECT_ID by 1 only. However, these increments will be larger (than the 1's and 2's noted above) if any other object was dropped since the object was last created: those dropped objects, regardless of whether they were purged from the recycle bin, have already used some IDs. As a result, these IDs can't be reused, much like sequence numbers that have already been consumed. But there's no sequence behind this scheme. (The special object _NEXT_OBJECT plays a role in assigning a new ID. While its OBJ# is always 1, its DATAOBJ# seems to hold the highest number a new OBJECT_ID or DATA_OBJECT_ID may jump to in special cases, such as a database bounce. Once that happens, MTIME for _NEXT_OBJECT is modified and DATAOBJ# jumps by 10. This behavior is not affected by flushing the shared pool, which flushes the dictionary cache including the dc_object_ids component. Incidentally, if you see contention on _NEXT_OBJECT, you may be creating new objects too frequently.) Note that because exchanging a partition with a table swaps their DATA_OBJECT_IDs, we cannot assume that DATA_OBJECT_IDs always increase.
If the table or its partition is already empty, truncating it does not increment DATA_OBJECT_ID. However, moving an empty table or rebuilding an empty index or index partition still increments DATA_OBJECT_ID, consistent with the relocation of their data blocks. In the case of an online index rebuild, DATA_OBJECT_ID could increment by 3 because of "transient" objects created during the rebuild, i.e., the journal table and the IOT TOP index.
Since an object can have two IDs, when should we use which? You should use DATA_OBJECT_ID when you call the DBMS_ROWID package, query the V$BH or X$BH views, or run oradebug lkdebug -m pkey. In most other cases, you should use OBJECT_ID. Some views, such as V$SEGSTAT, V$SEGMENT_STATISTICS and V$LOGMNR_CONTENTS, expose both IDs, so there won't be confusion. However, pay special attention to the DBMS_ROWID package and the X$BH fixed table, because their object number argument or OBJ column does not carry the letter "D" in its name, so it's easy to incorrectly assume it matches OBJECT_ID. One oddity with X$BH.OBJ or V$BH.OBJD is that the number can be much larger than max(DATA_OBJECT_ID) in DBA_OBJECTS. For example, 4294967295 is used for undo segment headers, and multiples of 4 million for global temporary tables or indexes. (Ref: http://jonathanlewis.wordpress.com/2006/11/27/object-ids/) Oracle doesn't record the IDs of these segments in DATA_OBJECT_ID of DBA_OBJECTS because they change easily, and for global temporary tables, there may be multiple segments corresponding to one table.
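For instance, the buffered blocks of a table in V$BH should be looked up by DATA_OBJECT_ID, and DBMS_ROWID.ROWID_OBJECT returns the data object number embedded in a ROWID (the owner and table names below are illustrative):

SELECT COUNT(*) cached_blocks
FROM v$bh
WHERE objd = (SELECT data_object_id FROM dba_objects
              WHERE owner = 'SCOTT' AND object_name = 'EMP' AND object_type = 'TABLE');

SELECT DBMS_ROWID.ROWID_OBJECT(rowid) data_obj_id FROM scott.emp WHERE ROWNUM = 1;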
A less obvious mistake some DBAs make is this: a lock-checking script blindly assumes V$LOCK.ID1 matches an object's OBJECT_ID without qualifying the lock type. In fact, only a few types of locks, most notably TM, record the object's ID in V$LOCK.ID1. Check V$LOCK_TYPE (10g and up only) for the meaning of ID1.
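A sketch of a safer lock query, which qualifies the lock type before joining ID1 to OBJECT_ID:

SELECT l.sid, l.lmode, o.owner, o.object_name, o.object_type
FROM v$lock l, dba_objects o
WHERE l.type = 'TM'   -- only lock types whose ID1 holds an object ID
AND l.id1 = o.object_id;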
Another source of confusion and error is related to the timestamp columns in the DBA_OBJECTS view. The three timestamp-related columns are CREATED, LAST_DDL_TIME, and TIMESTAMP. (Note that TIMESTAMP is not a DATE type, and although old documentation says it's VARCHAR2(20), it's actually always VARCHAR2(19) because of the TO_CHAR format in the view definition.) The CREATED column is populated as soon as the object starts to be created, which may differ significantly from the time the creation of a big object completes. LAST_DDL_TIME records when a DDL statement last ran against the object. In contrast with the CREATED column, if the DDL (other than CREATE) takes a while to finish, as in the case of rebuilding a large index or truncating a large table, LAST_DDL_TIME is the time of the end, or close to the end, of the DDL. This knowledge may come in handy in some post-crisis root cause analyses. Incidentally, LAST_ANALYZED of DBA_TABLES|INDEXES is approximately the beginning time of an ANALYZE but the ending time of DBMS_STATS.GATHER_TABLE|INDEX_STATS. (If you test GATHER_TABLE_STATS, set CASCADE to false for obvious reasons.)
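To see the three columns side by side for one object (the owner and name are placeholders), converting TIMESTAMP back to a DATE with the format used in the view definition:

SELECT created, last_ddl_time,
       TO_DATE(timestamp, 'YYYY-MM-DD:HH24:MI:SS') spec_timestamp
FROM dba_objects
WHERE owner = 'SCOTT' AND object_name = 'EMP' AND object_type = 'TABLE';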
A DDL is a SQL statement that modifies the data dictionary and implicitly commits your transaction ("implicit" means it does not increment the user commits statistic). However, not all DDLs start with the word "ALTER" (and not all ALTER statements are DDLs!). We know that if many SQLs using a particular object (table, view, etc.) start to hard parse at the same moment, it is possible that a DDL statement, such as GRANT, ran on the object not long ago. Conversely, if you intentionally need to invalidate your SQLs (and thus force a hard parse) but don't want to indiscriminately flush the whole shared pool, you can run a harmless DDL on the object, e.g., GRANT SELECT ... TO DBA. (In Oracle 10.2.0.4 and up, you have even finer granularity: you can flush an individual cursor.)
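For example, the following (object and grantee names are illustrative) invalidates cursors referencing SCOTT.EMP without flushing the whole shared pool; the commented-out alternative purges a single cursor, subject to the version and patch level noted above:

GRANT SELECT ON scott.emp TO dba;
-- Alternative in 10.2.0.4 and up: purge one cursor, identified by its address and
-- hash value taken from V$SQLAREA (the substitution variables are placeholders):
-- EXEC sys.dbms_shared_pool.purge('&address, &hash_value', 'C');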
One surprise with LAST_DDL_TIME is that not all DDLs update this timestamp. It's understandable that it will not be updated by a failed DDL, which nevertheless commits the transaction, or by a DDL that does nothing to a database object, such as ALTER SYSTEM SET ENCRYPTION. But some successfully executed DDLs do not update this column either. Based on tests in Oracle 10gR2 and 11gR1, the commands that do not update LAST_DDL_TIME include ALTER TRIGGER ... COMPILE, ALTER TABLE ... SHRINK SPACE, ALTER INDEX ... COALESCE, FLASHBACK TABLE, and statistics gathering with DBMS_STATS.GATHER_TABLE|INDEX_STATS (this list may not be complete).
To find the time these DDLs last ran, you may have to drill into redo logfiles using LogMiner or other means, because AUDIT does not audit all these commands, e.g. ALTER TRIGGER COMPILE. (You may find a timestamp in X$KGLOB.KGLNATIM, but that's the parent cursor's creation time, not child cursor's last active time.)
Perhaps related to this is the fact that some of the above DDLs also don't invalidate cursors that reference the table (ALTER TABLE SHRINK SPACE, ALTER INDEX COALESCE, FLASHBACK TABLE). In this list, invalidation of cursors when DBMS_STATS.GATHER_TABLE_STATS runs can be controlled by the NO_INVALIDATE option. (By default, NO_INVALIDATE was set to FALSE in Oracle 9i, i.e., cursors are invalidated when DBMS_STATS gathers statistics. In Oracle Database 10g, this default changed to DBMS_STATS.AUTO_INVALIDATE, allowing cursors to be invalidated at a later time. Hence, make sure you use the right value in Oracle Database 10g as required.)
The TIMESTAMP column in the DBA_OBJECTS view is less well known. It is internally known as the "specification timestamp" (see $ORACLE_HOME/rdbms/admin/sql.bsq, or dcore.bsq in 11g). If you run a DDL that does not change the table specification, such as ALTER TABLE MOVE or GRANT, LAST_DDL_TIME changes to the current time but the TIMESTAMP column remains unchanged. For a view or other dependent object, a DDL such as ALTER ... COMPILE does the same. However, a change in TIMESTAMP does not necessarily mean the table specification changed to something different: for example, if you ALTER TABLE MODIFY a column to exactly its current type, TIMESTAMP is still updated. Also, a change in the TIMESTAMP value generally corresponds to invalidation of other objects that depend on this object. For example, a view will become invalid if a table used in the view gets a new TIMESTAMP (but see below for the enhancement in 11g). One exception, in 10gR2 and up, is when this object is a partitioned table and the dependent objects reference it directly instead of through synonyms. (Ref)
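A quick way to see the difference (the table name T is hypothetical) is to run a GRANT, a DDL that leaves the specification alone, and compare the two columns before and after:

SELECT TO_CHAR(last_ddl_time, 'YYYY-MM-DD HH24:MI:SS') last_ddl, timestamp
FROM user_objects WHERE object_name = 'T';
GRANT SELECT ON t TO dba;
-- LAST_DDL_TIME is now the current time; TIMESTAMP is unchanged:
SELECT TO_CHAR(last_ddl_time, 'YYYY-MM-DD HH24:MI:SS') last_ddl, timestamp
FROM user_objects WHERE object_name = 'T';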
Beginning with 10g, a PL/SQL stored program does not change LAST_DDL_TIME or TIMESTAMP if you CREATE OR REPLACE it with exactly the same code. But any change of code will do so, and in the case of a PL/SQL package that uses a global variable, that may cause a problem for a long-connected session that uses the package. If a session executed the package earlier, the first execution in that same session after the package was modified in a different session will throw ORA-4068 (existing state of packages has been discarded). This error is relatively harmless except for an unattended batch job, which must catch and handle this exception by re-submitting the call (but see complications in Notes 783675.1 and 848337.1), or by always preceding the execution with a call to DBMS_SESSION.RESET_PACKAGE or MODIFY_PACKAGE_STATE if the business requirement can still be met. Proactive alerting for this situation may be achieved by checking X$KGLLK.KGLLKFLG. (Ref) Oracle 11.2.0.2 improved handling of this error. (Ref)
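A minimal sketch of the retry approach for an unattended job (the package my_pkg and its procedure do_work are hypothetical):

DECLARE
  state_discarded EXCEPTION;
  PRAGMA EXCEPTION_INIT(state_discarded, -4068);
BEGIN
  my_pkg.do_work;
EXCEPTION
  WHEN state_discarded THEN
    my_pkg.do_work;  -- the second call re-instantiates the package state
END;
/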
The STATUS column of DBA_OBJECTS, obviously, is only meaningful for the types of objects that can ever become invalid. (The Usage Notes on DBMS_UTILITY.INVALIDATE in "PL/SQL Packages and Types Reference" list these object types: procedure, function, package, package body, trigger, type, type body, library, view, operator, synonym, java class.) Objects containing physical segments by nature never become invalid. Although an index can become UNUSABLE (visible in DBA_INDEXES.STATUS), its object status is still valid; the unusability of an index is about invalid ROWIDs in the index leaf blocks, which is irrelevant to object dependency tracking. Beginning with Oracle 10g, a synonym can become invalid, signifying a translation error. This is a great improvement because it allows a DBA's invalid-object monitoring script to catch the error before it causes problems in production. STATUS takes the values 'VALID' and 'INVALID', but 'INVALID' is actually an umbrella value that covers a few others. These values can be seen in sql.bsq or dcore.bsq, including real invalids that can't compile (denoted by a value of 3 in OBJ$.STATUS) and invalids that may be only temporarily so (denoted by a value of 5 in OBJ$.STATUS), etc. For example, in the case of materialized views, OBJ$.STATUS of 5 corresponds to 'NEEDS_COMPILE' in DBA_MVIEWS.COMPILE_STATE, while 3 corresponds to 'COMPILATION_ERROR'. (However, a STATUS of 'INVALID', specifically a COMPILE_STATE of 'COMPILATION_ERROR' for materialized views, may be a false alarm due to bugs such as 8320150 and 6402311.)
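To see the finer-grained status codes behind 'INVALID', a sketch like the following can be used (it reads SYS.OBJ$, so run it as a suitably privileged user):

SELECT o.owner, o.object_name, o.object_type, o.status, s.status internal_status
FROM dba_objects o, sys.obj$ s
WHERE s.obj# = o.object_id
AND o.status = 'INVALID';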
An object generally becomes invalid when its underlying or referenced object changes its structure or specification. However, that's not an entirely accurate statement. It's better to say that the underlying object has undergone certain DDLs. As discussed above, only those DDLs that change the base object's TIMESTAMP invalidate the dependent object(s). A GRANT command is not one of them, and the "softer" DDLs that don't even invalidate cursors will also not invalidate dependent objects. Oracle 11g somewhat relaxes the invalidation rules by introducing fine-grained dependency. For example, if view V does not reference column C of table T, you can alter T to do anything to C without invalidating V. (This feature can be turned off by setting the parameter _IGNORE_FG_DEPS to tables or all, perhaps for troubleshooting purposes.)
Normally an invalid object such as a view automatically compiles itself to a valid state when it is subsequently used and no error is thrown. However, sometimes the object requires explicit compilation to become valid. A common case involves an invalid trigger that could, under certain conditions, throw ORA-4020, a library cache deadlock (not to be confused with ORA-60, an enqueue deadlock). It's always good practice to manually compile an invalid object unless you know for sure it can be auto-compiled when it's used later, because sometimes compilation involves a code change, or compilation of other objects, which requires human intervention one way or another. Synonym compilation is special in that it always changes STATUS to valid even if the target object cannot be accessed. Therefore a better way to validate a synonym is simply to describe it once: if it describes the target successfully, it is validated, and if it fails, it remains invalid. (This built-in side effect of DESCRIBE, i.e., compiling or validating the object, can be turned off by event 10525, probably useful in troubleshooting or research.)
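In SQL*Plus, for a synonym named MY_SYN (hypothetical), the two approaches compare as follows:

ALTER SYNONYM my_syn COMPILE;  -- always sets STATUS to VALID, even if the target is gone
DESCRIBE my_syn
-- If DESCRIBE succeeds, the synonym is validated as a side effect;
-- if it fails (e.g. ORA-04043), the synonym stays INVALID.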
There's no straight SQL statement to invalidate an object. The PL/SQL procedure DBMS_UTILITY.INVALIDATE, available since 10gR2, can be used to intentionally invalidate an object. One use of this is to find out whether an invalidatable object is still being used. For example, before dropping a table in production, best practice calls for renaming the table for a few days in case someone still uses it. But you can't rename a package before dropping it. Invalidating the package and checking its status later can tell you whether the package has been automatically compiled by a session executing it. Note that the INVALIDATE procedure does not update LAST_DDL_TIME or TIMESTAMP, although its counterpart, VALIDATE (since 10gR1), does. So does the automatic or implicit compilation when the invalid object is used.
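A sketch of this technique (the schema and package names are made up):

DECLARE
  v_obj_id NUMBER;
BEGIN
  SELECT object_id INTO v_obj_id
  FROM dba_objects
  WHERE owner = 'APPUSER' AND object_name = 'OLD_PKG'
  AND object_type = 'PACKAGE BODY';
  DBMS_UTILITY.INVALIDATE(v_obj_id);
END;
/
-- Check back later; a VALID status means some session has used (and thereby
-- recompiled) the package in the meantime:
SELECT status FROM dba_objects
WHERE owner = 'APPUSER' AND object_name = 'OLD_PKG' AND object_type = 'PACKAGE BODY';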
Compilation is independent of whether an object is enabled or disabled (if the object can be enabled or disabled at all). For example, assuming that recompiling an invalid, disabled trigger to valid status automatically enables it could have dangerous consequences; it does not.
The TEMPORARY column in the DBA_OBJECTS view indicates whether this object is a global temporary table, its index, or the LOB segment for its LOB column. For a table or index, this is redundant because you can derive this information from the TEMPORARY column of DBA_TABLES and DBA_INDEXES. Note the temporary LOB in this context is not related to that created by DBMS_LOB.CREATETEMPORARY, which is visible in V$TEMPORARY_LOBS.
The GENERATED column indicates whether the name is system-generated. If both OBJECT_NAME and SUBOBJECT_NAME have values, GENERATED refers to SUBOBJECT_NAME. However, this is not completely reliable. If part of the name is system-generated and part of it is specified by you, this column could be either 'Y' or 'N'. For example, although the names of the secondary indexes created as a result of a new CTXSYS.CONTEXT text index are considered system-generated, none of the secondary objects created as a result of a new CTXSYS.CTXCAT index are. In fact, even a completely generated name may be marked 'N': the name of the journal table used during an online index rebuild is not considered generated, although the interim IOT TOP index is. The statements below will leave behind a completely system-generated type named like 'SYSTP%==', and yet DBA_OBJECTS does not consider it generated.
CREATE TYPE MYTYPE AS TABLE OF VARCHAR2(4000);
/
SELECT CAST(COLLECT(MYCOL) AS MYTYPE) FROM MYTABLE;
However, it is possible that this is just a side effect of Bug 4033868, where the generated type is not supposed to persist in the first place. (Warning: sometimes the type won't be cleaned up by SMON, which repeatedly throws ORA-21779, due to Bugs 8623930 and 8674868. So if you test the query above, drop the type manually.)
The SECONDARY column of DBA_OBJECTS is about secondary objects created when you create a domain index, such as an Oracle text index. The undocumented DBA_SECONDARY_OBJECTS view reveals what the "primary" objects are for these secondary objects. However, that view only lists the DR$ tables for the text index, and the information is not as complete as in DBA_OBJECTS.
So far we've reviewed all documented columns of DBA_OBJECTS. Some columns of OBJ$ are not exposed, such as the different types of invalid status and other attributes of an object. See the description of the internal table OBJ$ in the sql.bsq or dcore.bsq script for details.
Other "Object" related views
Apart from the object-oriented features of Oracle, the word "object" is used in other places in the database. The DBA_OBJECT_SIZE view reports PL/SQL code size, separated into source, parsed and executable code size. You can measure the source size yourself: it is the number of bytes of your code starting from words such as "PROCEDURE MYPROC AS ...". It counts only one character at each line break, even on Windows, the OS where an ASCII text file uses two characters for each line break. The DBA_OBJECT_SIZE view is hardly ever used. However, to give an accurate answer to "How big is a schema?" when the schema has a huge amount of PL/SQL code but an insignificant total segment size, DBA_OBJECT_SIZE should not be ignored.
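For example, a sketch that sums stored PL/SQL code size per schema, something a DBA_SEGMENTS-only report would miss:

SELECT owner,
       SUM(source_size + parsed_size + code_size + error_size) plsql_bytes
FROM dba_object_size
GROUP BY owner
ORDER BY plsql_bytes DESC;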
V$OBJECT_USAGE is one of the very few V$ views that are based on data dictionary tables instead of internal X$ fixed tables. (As a result, it has no GV$ version.) The name implies a more ambitious, broader-scale plan to record usage of various types of objects. However, so far it is only used to monitor index usage after an ALTER INDEX ... MONITORING USAGE command has been run on that index. If you do use index monitoring, unless the columns the index is built on are all null, remember not to collect stats with DBMS_STATS on the index, or on the table or schema with AUTO_CASCADE. That would reset your index usage monitoring and you would have to start over again. (See Bugs 4615996 and 4432354.) Based on its current definition and its limitation to index monitoring, a more appropriate name for V$OBJECT_USAGE would be USER_INDEX_USAGE (but not ALL_ or DBA_INDEX_USAGE, because the view is defined to limit the owner to userenv('SCHEMAID')). [2019-09 Update: Beginning with Oracle 12cR1, V$OBJECT_USAGE is deprecated in favor of the new view USER_OBJECT_USAGE, exactly as I had predicted. Oracle also has a new view DBA_OBJECT_USAGE, which contains an OWNER column. Beginning with Oracle 12cR2, the new view DBA_INDEX_USAGE provides more detailed usage information.]
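A sketch of the monitoring workflow (the index name is hypothetical; connect as the index owner, since V$OBJECT_USAGE only shows your own indexes):

ALTER INDEX emp_ix MONITORING USAGE;
-- ... let the application run for a representative period ...
SELECT index_name, table_name, monitoring, used, start_monitoring, end_monitoring
FROM v$object_usage
WHERE index_name = 'EMP_IX';
ALTER INDEX emp_ix NOMONITORING USAGE;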
V$SESSION has four columns recording which object, file, block and row a session is waiting on. However, these columns don't always hold the correct numbers; previous entries may hang around without proper cleanup. The object column, ROW_WAIT_OBJ#, could have a value of -1 ("Holder of Lock" according to Bug 8331864), or a number totally irrelevant to your work. So far I haven't found a bug report associated with this. (See my summary.)
Objects in Oracle have a hierarchy of dependency. This information, recorded in the DEPENDENCY$ table and exposed through the DBA_DEPENDENCIES view, is used to invalidate dependent (child) objects when the referenced (parent) object, or only the relevant part of it in 11g, changes its specification. Some DBAs like to compile invalid objects according to the dependency hierarchy instead of simply running utlrp.sql. For example, if view V1 depends on table T, view V2 on V1, and package P on V2, and all the dependent objects are invalid, you can compile the end of the dependency chain, package P (or its body), to validate all of the upstream objects.
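A sketch of a query that walks the dependency chain downward from a table so you can compile from the far end (the owner and table name are placeholders):

SELECT LEVEL, owner, name, type
FROM dba_dependencies
START WITH referenced_owner = 'SCOTT'
       AND referenced_name = 'T'
       AND referenced_type = 'TABLE'
CONNECT BY NOCYCLE referenced_owner = PRIOR owner
       AND referenced_name = PRIOR name
       AND referenced_type = PRIOR type
ORDER BY LEVEL;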
The dependency between a PL/SQL package and its body is special. DBA_DEPENDENCIES shows that the body depends on its specification. As expected, when the spec is invalid the body is also invalid, while when the body is invalid the spec can remain valid. (You should never see an invalid package spec with a valid body; otherwise you may be hitting Bug 6136074.) However, compiling either the body or the spec validates both of them, contrary to the normal rule that only compiling the dependent object can propagate validation to its referenced object, not the other way around.
DBMS_METADATA.GET_DEPENDENT_DDL can track dependents of an object, even an index, which the DBA_DEPENDENCIES view does not track (except for domain indexes). If you ever need to view a list of objects in a tree-like hierarchy and don't want to write your own script, DBMS_UTILITY.GET_DEPENDENCY partially serves the purpose, showing the current object and its children, recursively if any, but not its parents. If you run this procedure, it's best to log in as SYS. The documentation fails to warn that if you log in as the schema owner of the object, you won't see any dependent that you're not explicitly granted the SELECT privilege on or that has no public synonym. (If you log in as another user, such as a regular DBA, it's even worse: you have to be explicitly granted SELECT on the object to avoid ORU-10013 ("xxx was not found"), and even then you'll miss the dependent objects you're not granted the SELECT privilege on.) Others may prefer the script utldtree.sql and its views DEPTREE and IDEPTREE.
A common question is how to find what uses a given sequence. Unfortunately, unless the sequence is used in a PL/SQL stored program such as a trigger, DBA_DEPENDENCIES will not give you the answer. The fact that an application issues INSERT INTO T (PK,...) VALUES (MYSEQ.NEXTVAL,...) does not create a dependency relationship between the table and the sequence in which the sequence is either a parent or a child. In fact, even if the sequence is used in a PL/SQL program, if it appears in dynamic SQL run through EXECUTE IMMEDIATE, the "dependency" is masked because the SQL statement is treated merely as a string. This "dependency" must be documented outside the database. It has been pointed out that Oracle could be enhanced to allow sequencename.nextval as a column default value. If that happens, this desired dependency could be recorded in Oracle, along with easier coding and improved performance in most cases. (Update 2015-02: A test in 12.1.0.2 shows that a table with a column using a sequence-based default value still does not show a dependency between the table and the sequence in DBA_DEPENDENCIES.)
The V$OBJECT_DEPENDENCY view is to the library cache what DBA_DEPENDENCIES is to the data dictionary. The column names of V$OBJECT_DEPENDENCY suggest a dependency flow from children back to parents, because the FROM_xxx columns indicate cursors or PL/SQL objects and the TO_xxx columns refer to the objects that those cursors or PL/SQL objects use or depend on. Except in the case where the PL/SQL code is wrapped, the information in this view can be derived by reading the cursor or PL/SQL code to see what tables or views it references, or by checking V$SQL (combined with DBA_SOURCE) to see what code uses a given table. A more frequently asked question is this: given a SQL statement, how do we find the PL/SQL that calls it? From 10g on, the PROGRAM_ID and PROGRAM_LINE# columns of V$SQL point to the stored PL/SQL object and the line in its code. 11g V$SESSION provides PLSQL_xxx columns, which may also help under certain circumstances. However, if the caller is a PL/SQL anonymous block, or your database version is older than 10g, or the cursor has partially aged out, only X$KGLRD retains this dependency information. (Ref) (A reviewer of this article asks: wouldn't queries on V$OBJECT_DEPENDENCY hold the library cache and shared pool latches for long periods on large, fragmented shared pools? If so, we need to place a warning! I'm not aware of this issue and can't find a bug report. The SQL of the view has an indexed join on the hash column (kglnahsh) between x$kglob and x$kgldp, although in 9i the index is not used for some reason, even with a RULE hint.)
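A sketch for 10g and up that maps a cursor back to the stored PL/SQL unit and line that issued it (the substitution variable stands for your SQL_ID):

SELECT s.sql_id, o.owner, o.object_name, o.object_type, s.program_line#
FROM v$sql s, dba_objects o
WHERE s.sql_id = '&sql_id'
AND o.object_id = s.program_id;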
Conclusion
In this article, we have taken an uncommon look at some common attributes of Oracle objects through exploration of a few data dictionary views, primarily DBA_OBJECTS. Hopefully it helps you understand Oracle's various types of objects better, write DBA scripts more efficiently, and troubleshoot vexing issues!
July 2009 (created)
February 2016 (modified)