[http://www.freelists.org/post/oracle-l/Hot-backup-question,3]

> what is the impact on users during a hot backup?

When the tablespace is in hot backup mode, the first change to a data block in the buffer cache is accompanied by writing the whole before-image of the block to redo. So in general, DML during the hot backup generates more redo. But we should not exaggerate this increase. Only the first change to the block while it is in the buffer cache is whole-block logged (controlled by _log_blocks_during_backup); later changes, unless the block is flushed to disk and read back into the cache, generate normal redo as usual. You may be able to take advantage of this fact by doing mostly same-block updates during the hot backup and using a big buffer cache.

It's a common misconception that all block changes during hot backup are logged as whole blocks. See message 3 at http://groups.google.com/group/comp.databases.oracle.server/browse_frm/thread/e3a2b32fa3c68904 and "Proof! November 15, 2005 - 5am Central time zone" at http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:271815712711 But the most detailed description is on p.95 of Rama Velpuri's "Oracle8 Backup & Recovery Handbook", Oracle Press, 1998.

Blame the Oracle documentation for this misconception. http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1.htm says: "When a tablespace is in backup mode, each time a block is changed the database writes the before-image of the entire block to the redo stream before modifying the block." That "each time" is gone in the 11g documentation:
http://download.oracle.com/docs/cd/B28359_01/backup.111/b28270/glossary.htm#BRADV90171
http://download.oracle.com/docs/cd/E11882_01/backup.112/e10642/rcmcncpt.htm#sthref609

> how to overcome this

The best approach, of course, is not to put the tablespace in hot backup mode at all: use RMAN to back up. If you have to use a tool other than RMAN, there's no ideal solution.
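The first-change-only behavior described above can be observed by watching the session statistic "redo size" around repeated updates to the same block. A hedged sketch, where table t, its columns, and tablespace users are illustrative assumptions, not names from the thread:

```sql
-- Illustrative only: table T (in tablespace USERS) with a row ID = 1 is assumed.
alter tablespace users begin backup;

-- Note the current redo size for this session.
select s.value
  from v$mystat s, v$statname n
 where s.statistic# = n.statistic#
   and n.name = 'redo size';

update t set val = val + 1 where id = 1;  -- first change: whole before-image block logged
commit;
-- Re-run the query above: the delta should exceed db_block_size.

update t set val = val + 1 where id = 1;  -- same cached block: normal redo only
commit;
-- Re-run the query: the delta drops back to the usual small change-vector size.

alter tablespace users end backup;
```

If the block is aged out of the cache and read back in between the two updates, the second update is whole-block logged again, so keep the buffer cache quiet during the test.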
Just reduce DML to the tablespace and back up one tablespace at a time.

To borrow this thread, I have an unproved thought on this. Has anybody considered the possibility of split blocks when the file system I/O chunk size (commonly called the file system block size) is the same as db_block_size? Various sources always seem to use a mismatch between the two sizes as the example to explain split blocks. I think there's still a danger as long as DBWn and the OS tool can write and read, respectively, the same block at the same moment. But it's possible that when the two sizes match, the probability of split blocks is reduced. This of course assumes the datafile is on a file system, and that the backup tool honors the file system I/O chunk size as its unit of read.

By the way, these two pages
http://www.ixora.com.au/tips/block_size.htm
http://www.antapex.org/undocumented_init.txt
talk about the possibility of turning off _log_blocks_during_backup during backup. If anybody really wants to do that (possibly invalidating support), I would add: after the backup, run dbv on the backed-up file to see whether you get any influx blocks. It would be nice to run dbv on a file that is being created (if only it could accept input from a pipe). But with all that hassle, it would be easier to use RMAN instead of simulating it.

[http://www.itpub.net/thread-1351440-1-1.html]

> How does Oracle know a block was already logged into redo as a whole block so this
> time it should log as normal? Is there a marker in the datafile block header?

The datafile block header should not have it. It's instead a flag of the buffer header in the buffer cache, i.e. the flag column of x$bh. See http://www.jlcomp.demon.co.uk/buf_flag.html for all the bits of the flag; one of the bits indicates that. It won't be hard to do a test to find the exact bit. Also, see Steve Adams' note http://www.ixora.com.au/q+a/redo.htm: "The flags field in the buffer header for each buffer contains an indication of whether the block has been logged.
If the flag is set, the block is not logged for subsequent changes. However, if a buffer is reused, and the block is then read back into the cache for further modification, the whole block will be logged once again prior to such modification."

> The repeated updates to the same block will generate the same amount of redo as when the
> tablespace is not in backup mode.

If the block is written to the datafile and brought back into the buffer cache again, then the first change to it will again be logged as a whole block. If you have a big buffer cache and you don't manually checkpoint, the probability of that happening is of course reduced.

> In addition, it [putting the tablespace in backup mode] freezes the tablespace datafiles' headers.

In backup mode, the datafile's header is not completely frozen: the checkpoint counter is still incremented. I just confirmed it's still so in 11gR2:

alter session set events 'immediate trace name file_hdrs level 10';
alter system checkpoint;
alter session set events 'immediate trace name file_hdrs level 10';

Compare the two trace files to make sure all "Checkpoint cnt" values have gone up. Then:

alter tablespace begin backup;
alter system checkpoint;
alter session set events 'immediate trace name file_hdrs level 10';

Compare the trace files and focus on that tablespace. You'll see the checkpoint SCN and time are frozen, but "Checkpoint cnt" still goes up.
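Coming back to the x$bh flag mentioned earlier: finding the exact "already logged" bit could be done by dirtying one block of a test table while its tablespace is in backup mode and diffing the buffer's flag before and after. A hedged sketch, run as SYS; the owner SCOTT and table T are assumptions for illustration:

```sql
-- Run as SYS. SCOTT.T is an assumed test table; x$bh.OBJ matches
-- dba_objects.data_object_id. Capture the flag of T's buffers:
select b.dbarfil, b.dbablk, b.flag
  from x$bh b, dba_objects o
 where o.owner = 'SCOTT'
   and o.object_name = 'T'
   and o.data_object_id = b.obj;

-- In another session: put T's tablespace in backup mode, update one row
-- of T, commit, then re-run the query above. The bit that newly appears
-- in FLAG for the dirtied block should be the whole-block-logged indicator;
-- cross-check it against the bit list at jlcomp.demon.co.uk/buf_flag.html.
```

The same before/after comparison outside backup mode serves as the control: the candidate bit should stay clear there.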