Selected messages posted to news:comp.databases.oracle.server. Some of my words are edited and so are different from originally posted to the newsgroup. ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: Redo block size... Date: 2004-01-25 14:43:06 PST > > > How do I find out the chunk size in which redo information is written > > > to disk? > > > > Oracle 9.2.0.3.0 on RedHat 8.0. > > Extfs3 filesystems, 4KB for datafile volumes. > > > > SQL> select distinct block_size from v$archived_log; > > > > BLOCK_SIZE > > ---------- > > 512 > Is the same for me. Ok, I guess it's kinda hard to distribute that across > two disks. In case you're not running in archivelog mode, you can still check the redo logfile block size with $ORACLE_HOME/bin/dbfsize: $ dbfsize redo02.log Database file: redo02.log Database file type: raw device Database file size: 819200 512 byte blocks $ uname -a SunOS 5.8 Generic_108528-10 sun4u sparc SUNW,Ultra-80 In addition to block size, it also tells you if the file is filesystem-based. In my case, it's using Veritas Quick I/O but reported as raw. Most OSes use 512 bytes as logfile block size. But HPUX uses 1k, probably DEV_BSIZE defined in /usr/include/sys/param.h. This size is actually not filesystem dependent. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: Redo block size... Date: 2004-01-26 09:48:49 PST "Noons" wrote in message news:<401471f0$3$19709$afc38c87@news.optusnet.com.au>... > "Yong Huang" wrote in message > news:b3cb12d6.0401251443.e217f7b@posting.google.com... > > > filesystem-based. In my case, it's using Veritas Quick I/O but > > reported as raw. > > That would be because the partition is managed by Veritas itself > from a Unix raw partition? Precisely. > > Most OSes use 512 bytes as logfile block size. But HPUX uses 1k, > > probably DEV_BSIZE defined in /usr/include/sys/param.h. This size is > > actually not filesystem dependent. > > Actually, if the redo log file is in a file system, write size is > totally dependent on it. If you request a write of 512 bytes in > a file system of 4K block size, the OS *will* write 4K. > Like it or not... > > And the main reason for the usual recomendation to keep redo > log in raw. We both got it right. Oracle's logfile block size for most OSes remains at 512 bytes as shown by dbfsize or x$kccle.lebsz or v$archived_log.block_size, regardless whether it's filesystem-based or not. But if it's on a fs, a partial block write is handled by fs as a full block read, filled with redo entries, and then a full block write. Hence the raw for logfile recommendation. For performance reason, I can't imagine anybody recommends otherwise. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: Oracle Doc Error Newsgroups: comp.databases.oracle.server Date: 2004-01-25 11:32:58 PST danielroy10junk@hotmail.com (Daniel Roy) wrote in message news:<3722db.0401211957.eb6c17d@posting.google.com>... > ..., Oracle states > that "If you need to drop the current group, first force a log switch > to occur.". This looked fishy to me, so I decided to test it: > ... 
> SQL> alter database drop logfile group 2; > alter database drop logfile group 2 > * > ERROR at line 1: > ORA-01624: log 2 needed for crash recovery of thread 1 > ORA-00312: online log 2 thread 1: > 'C:\ORA9201\ORADATA\OEMREP\REDO02.LOG' > > > SQL> alter system checkpoint; > > System altered. > > SQL> alter database drop logfile group 2; > > Database altered. > > As you can see, it seems to me that before being able to drop a log > group, a log switch must of course have occurred. But a checkpoint up > to at least the end of the log group must also have occurred! Is my > counter-example correct, or I'm missing something? Hi, Daniel, The documentation is indeed not detailed enough. Further, SQL Reference says "When you force a log switch, Oracle begins to perform a checkpoint but returns control to you immediately rather than when the checkpoint is complete." The confusion is due to the special "log switch checkpoint". There're many types of checkpoints; incidentally, I believe Rama Velpuri's book talks about them all. Metalink Note:76713.1 explains that a log switch checkpoint is a zero priority "checkpoint" (quotes are mine), and in most cases doesn't really do the checkpoint as we usually understand it. If you set log_checkpoints_to_alert = true (can be done with alter system), tail -f alert.log, and switch logfile, you'll see Sun Jan 25 11:00:58 2004 Beginning log switch checkpoint up to RBA [0xe.2.10], SCN: 0x0000.002a5767 Thread 1 advanced to log sequence 14 Current log# 1 seq# 14 mem# 0: C:\ORACLE\ORADATA\ORCL\REDO01.LOG Note there are no Completed lines. Then alter system checkpoint, and you'll see Sun Jan 25 11:01:47 2004 Beginning global checkpoint up to RBA [0xe.3e.10], SCN: 0x0000.002a57c6 Completed checkpoint up to RBA [0xe.3e.10], SCN: 0x0000.002a57c6 Completed checkpoint up to RBA [0xe.2.10], SCN: 0x0000.002a5767 Note that the SCN on the last line is for the so-called log switch checkpoint, and it matches the SCN beginning at 11:00:58. The real checkpoint has a later SCN than that "fake" log switch checkpoint SCN. But for completeness, Oracle is nice enough to show the last line so we can have a match with its Beginning line. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: Online Cursors Date: 2003-12-26 10:47:46 PST "Ahmed Baraka" wrote in message news:... > Hi there, > How can I dispaly number of cursors currently running in Oracle 9. Hi, Ahmed, The SQLs (cursors) that are being executed are shown in v$sql(area) where users_executing > 0. I wouldn't use v$open_cursor as the other person suggested since it may contain cursors not being executed at the moment. Yong Huang ******************************************************************************** From: yong321@yahoo.com (Yong Huang) Subject: Re: How to detect which session is being traced? Date: 2003-12-07 14:37:49 PST "Tanel Poder" wrote in message news:<3fd1b724$1_1@news.estpak.ee>... > > > Is there any way I can detect what session is being traced? Also, may I ... > dbms_support.stop_trace_in_session). AFAIK, there is no easy way to check > sql_trace's value for other sessions (maybe with oradebug dumpvar). There is > a dbms_system.read_ev procedure with which you can check the level for > sql_trace (event 10046), but this works only for current session. Not sure what var to dump in dumpvar. But oradebug dump events would do it. Here's a test on 8.1.7 on Linux.
In your regular user session, alter session set events '604 trace name errorstack'. (I chose error 604 so the trace file is not immediately created; for sql_trace or event 10046, you may simply look at the trace files in udump to find out without following the procedure shown below) In your sysdba session, run oradebug setospid <ospid>, then oradebug dump events 1 (According to Jonathan Lewis in Metalink Note:189389.999, 1 is for session, 2 for process and 4 for system). Then you see in the trace file for the first session, Evt#(b10) 604, TR Name ERRORSTACK and level 1. So if you want to find what events are set by your user sessions, simply loop through each session (where type='USER' in v$session) and run oradebug dump events 1 on them. Yong Huang and "zhu chao" ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: How do I ensure order of rows loaded by SQL Loader? Date: 2003-11-13 06:07:13 PST puneet.bansal@wipro.com (Puneet) wrote in message news:<4c276d80.0311122127.562b3607@posting.google.com>... > Hi everybody ! > I am loading a CSV file into Oracle 8i table using SQL Loader > utility. The data is in such a format that the order of the records is > of paramount importance. But when I retrieve the data from the table > it can be in any order, isn't it? > > My doubts are: > Does SQL Loader always load the records in the same order as that of > the data file? > > If yes then how can I ensure that when I retrive the records they are > always in the same order as the one specified in CSV file? (The table > is just a temp table and it doesn't have any primary key or any column > I can order by.) Will 'order by rowid' solve the problem? Hi, Puneet, SQL*Loader reads one line at a time in its datafile in that order. But what's loaded in a regular table is unordered. That's why a regular table is also called a heap table. If a later-loaded row can fit into a hole in that heap, it may go there. Using direct-path load should help. But even then Oracle can't guarantee the data retrieved serially will be shown in the same order as in the SQL*Loader datafile. As others suggested, you need a sequence number to order the data in Oracle. You wouldn't think you had this sequence in your CSV. But it *is* in there, implicitly by definition of a text file you read sequentially. Using 'order by rowid' won't help either. Think of what's in a rowid and the concept of a heap table. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: explain plan from pl/sql? Date: 2003-11-07 07:06:26 PST "mcstock" wrote in message news:... > ... > and the reason is that, although EXPLAIN PLAN is typically issues via > SQL*Plus, it is not a SQL*Plus command, but a legal SQL statement -- but > since it is not a DML statement, it is not directly supported by PL/SQL Actually, EXPLAIN PLAN is not a DDL, because it doesn't modify the data dictionary, and it doesn't commit your transaction. So I guess we can only say it *is* a DML. EXECUTE IMMEDIATE works for DML and queries too. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: How do you relate high I/Os waits to a specific file/process ? Date: 2003-09-30 21:12:29 PST spendius@muchomail.com (Spendius) wrote in message news:...
> Hi, > I'm often called and told "once again the server > suffers from high io waits activity, what the hell > is going on with Oracle again..." etc etc., which > I can verify with 'top', 'sar' or certain options of > the 'ps' command as well (I'm talking about io waits > reaching 40% and more in the '%wio' column of sar for > example). > I'd like to know whether it's possible to link this > IO activity and slowliness with > -precise files, and/or > -precise processes > (in order to see, of course, if they're Oracle related). > > It's very easy to link high CPU consumption with a process, > but not so with io activity, so maybe you can help me please ? Hi, Spendius, To identify the heavy I/O Oracle sessions or processes, v$sess_io in Oracle can tell you physical reads. You need to correlate the delta (difference) between two consecutive runs of select physical_reads from v$sess_io for each session. But this view does not record physical writes. So you may want to look at v$sesstat for "physical reads" and "physical writes" statistics, again watching the delta between two runs for the same sessions. If you just need a quick view of what OS processes consume the most I/O and you happen to use Solaris, try my topio command: topio -d. See http://yong321.freeshell.org/freeware/pio.html for details. To identify the Oracle files experiencing the most I/O, look at v$filestat and v$tempstat (or the undocumented and "pre-joined" v$filestatxs and v$tempstatxs for part of the info). To identify both processes and files, simply look at v$session_wait where event like 'db file%' or 'control file%' or 'log file%'. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: Which process spawns a dedicated server process? Date: 2003-09-27 21:12:35 PST "Michael J. Moore" wrote in message news:... > ------------------------------ > >> Which process spawns a dedicated server process? > >> Is it the listener process, or some other Oracle process. Also, on a Unix > >> system, when you do "ps -ef" to see your processes, the PPID points back to > >> a process named "init". Why does the PPID not point to some other process > >> like, for example, the listener, or PMON, or whoever spawned it. > >> thanks, > >> Mike --------------------------------- > > The processes are spawned by the OS, repesented by the process called > > init. Talk to you UNIX admin about how this works. > > > > Then my question back to you arises: why do you care? > > > > If you are the DBA, then you need to hit the manuals some more. > > If you are not the DBA, you should not care. > > > > (I suspect ayou are not the DBA but you feel to need to control this > > process somehow. If you have some problem of performance or other, > > post that and suggestions will come. Looking at process IDs will get > > you nowhere in understanding the ORACLE operations and architecture.) > > > > HTH, > > ed > ------------------------------ > Ed, I have hit the manuals resulting in the very need to ask this question. > > According to OCP 9i Database Fundamentals 1 Exam Guide: > > On page 12 it says: "... the listener tells Oracle to generate a new > dedicated server ..." yet on page 14 in the questions and answers section it > says "The listener spawns a new server process." > > So, which is it, "listener tell Oracle" or "listener spawns"? Or now you > have given me a third option "The processes are spawned by the OS". Hi, Michael, Your question is quite legitimate. 
On UNIX, when you connect to the database in dedicated configuration through SQL*Net, the listener spawns a child process, which immediately forks again; the intermediate parent exits, and the child exec's $ORACLE_HOME/bin/oracle. The server process thus created (also called a shadow process, as if it were a shadow of your client process) behaves like a UNIX daemon. The reason a server process has a parent pid of 1 is that it's a child of a parent that has exited. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: Can you explain the indication of "command_type" and "open_versions" column in the v$sqlarea Date: 2003-09-17 06:35:30 PST "ÐÜìÇ" wrote in message news:... > What is the meaning of "command_type" and "open_versions" column in the > v$sqlarea? what does it meanings where command_type=2 and open_versions >0. Hi, xiongye, The reference manual for v$sqlarea may not be that straightforward. But you can find the command type in the notes for v$session ("Table 3-3 COMMAND Column of V$SESSION and Corresponding Commands" in my 9i documentation). Open versions count in v$sqlarea is the number of open child cursors. To understand child vs. parent cursors, read some of Steve Adams' articles at www.ixora.com.au, or James Morle's Scaling Oracle8i. Here's my understanding. When the cursor is in the library cache, even just the parent or cursor head, it'll show up in v$sql and v$sqlarea (version_count in the latter is 1 or greater). When the cursor's context or cursor tail is also loaded (not just the parent or head), loaded_versions in both views is incremented. Further, if the cursor is open (not just head and tail present in the library cache), open_versions is incremented. Finally, when it's being executed, users_executing is non-zero. Note that loaded_versions and open_versions in v$sql are just 1 and 0; it would be better if they were called loaded and open for the boolean status. In v$sqlarea they're actual counts. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: Help with interpreting TKPROF output Date: 2003-08-29 12:53:18 PST tim.kearsley@milton-keynes.gov.uk (Tim Kearsley) wrote in message news:<725736ef.0308290009.1f736509@posting.google.com>... > > Below is a sample of some TKPROF output: >... > I'm OK with most of this, but the exact meaning of the "disk", "query" > and "current" columns puzzles me. My understanding is that the "disk" > column refers to the number of blocks physically read from disk, the > "query" column refers to the number of buffers retrieved in consistent > mode, and the "current" column refers to the number of buffers read in > curent mode. The bit I don't understand is why the query, which does > no inserts, updates etc., should get buffers in anything but > consistent mode. Hi, Tim, The disk column gives the number of blocks physically read (physical as far as Oracle sees it; they could be cached by the filesystem), not the number of disk reads, although the difference is only meaningful in the case of multiblock reads (see http://yong321.freeshell.org/computer/diskreadunit.txt). I can't say for certain, but I think the buffer reads (the "query" and "current" columns) are counted in buffers, not in the number of times buffers are read. To answer your second question, even a query needs current-mode gets, because the recursive SQLs read the data dictionary.
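A quick way to watch these two kinds of gets yourself, if it helps, is SQL*Plus autotrace, whose "consistent gets" and "db block gets" statistics correspond to tkprof's query and current columns. This is only a sketch: the table name t is made up (any table you can query works), and your account needs the plustrace role or equivalent access to the v$ views for autotrace statistics.

SQL> set autotrace traceonly statistics
SQL> select count(*) from t;

Since autotrace reports the session's statistics, recursive work done on your behalf (including the dictionary lookups mentioned above) is counted in those numbers along with the gets for your own statement.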
Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: Error : Using DBMS_ALERT Date: 2003-08-30 12:45:11 PST wonim.somaggio@lusis.fr (Petit Donghwa) wrote in message news:... > My goal is to notify automatically when ths table is inserted on my > web_page written in PHP. (PHP and Oracle9) > When I open this web_page it must run infinitely untile to to another > web_page URL. > So I made this function with PL/SQL function to use DBMS_ALERT. ... > Could you help me Please what is a problem?? > > ********************************************************************** > 1. table XYZ > create table XYZ > ( > DATA1 integer; > ); > > > 2. Trigger for this table XYZ > CREATE OR REPLACE TRIGGER XYZ_trig > after insert on XYZ > for each row > begin > dbms_alert.signal('xyz_alert', null); > END XYZ; > / > > > 3. crete function to signal to my PHP page to notify this table is > changed. > create or replace function change_xyz_f > return number > declare > alert_msg varchar2(20); > alert_status number := 0; > begin > DBMS_ALERT.REGISTER('xyz_insert'); > DBMS_ALERT.WAITONE('xyz_insert', alert_msg, alert_status, 300); > > if alert_status =1 > then > return 1; > else > return 0; > end if; > DBMS_ALERT.REMOVE('xyz_insert'); > end change_xyz_f; > / > > 4. on my web-page written in PHP > $connection_oracle = OCILogon("xxxxx","xxxxxx"); > $statement_oracle = OCIParse($connection_oracle, "BEGIN :s := > change_xyz_f(); END;"); > $statement_oracle = OCIParse($connection_oracle, $signal); > > OCIBindByName($statement_oracle, ":s", $s, 32); > OCIExecute($statement_oracle, OCI_DEFAULT); > OCIFreeStatement($statement_oracle); > OCILogOff($connection_oracle); > > if ( $s == 1) > echo "Table is changed!"; > else > echo "Table is Not changed!"; > > I make this PL/SQL function to use DBMS_ALERT. > But many errors. There may be several issues: 1. Your event in the function is 'xyz_insert' but the table trigger signals 'xyz_alert'. Just a typo? 2. You may already know. DBMS_ALERT sends the message out on commit. If you want it to be sent even on rollback after insert, the trigger needs an autonomous transaction. 3. DBMS_ALERT receives the last message you signaled if you have multiple sent. So FOR EACH ROW in the trigger probably won't work as expected. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: cursors-related oracle parameters Date: 2003-08-28 12:16:27 PST "Syltrem" wrote in message news:... > I would like to find a way to best calculate a good value for > session_cached_cursors. I thought setting it to a value that would be enough > to serve most client sessions, should be fine (i.e. if most clients have > less than 100 cursors open, set it to 100). Hi, Syltrem, The statistics "session cursor cache count" and "session cursor cache hits" in v$sesstat can be used to tune session_cached_cursors. Read the description in the Reference manual. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: 9i R2 database vs. 9i R2 Forms Developer PL/SQL compatibility Date: 2003-08-21 08:52:21 PST Harald Maier wrote in message news:... 
> > For associative array functionality (it's a documented feature in the 9i R2 > > PL/SQL reference) for key/value pairs: > > > > DECLARE > > TYPE CUST_ID_TYPE IS TABLE OF NUMBER INDEX BY VARCHAR(30); > > CUST_ID CUST_ID_TYPE; > > ID NUMBER; > > BEGIN > > CUST_ID('FLINTSONE') := 1; > > CUST_ID('RUBBLE') := 7; > > ID := CUST_ID('FLINTSTONE'); > > ... > > END; > > Especially, perl programer like this feature :). > Harald Associative arrays are also natively supported in awk, Tcl (and therefore Expect), KornShell newer than 11/98 version, Python and Visual Basic (called dictionaries in both), Java (Hashtables). PL/SQL finally catches up with other languages in this functionality. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: Deadlock on Parallel Update Date: 2003-08-15 07:40:01 PST "Bosco Ng" wrote in message news:<3f3b9e8b$1@shknews01>... > I got a night time batch process that needs to update on some of the rows of > some million-row scale tables. > > Ok, due to time constraint, we now plan to split the process into 4 > processes and each of the 4 processes will update a table TABLE_A based on > the SID, which is the unique key. > > So basically: > > Process 1: UPDATE TABLE_A SET COL_A = :b1 WHERE SID = :b2 > Process 2: UPDATE TABLE_A SET COL_A = :b1 WHERE SID = :b2 > Process 3: UPDATE TABLE_A SET COL_A = :b1 WHERE SID = :b2 > Process 4: UPDATE TABLE_A SET COL_A = :b1 WHERE SID = :b2 > > And I am pretty sure that the SID allocated to each processes are unique and > will not overlap, so they are all updating different rows. > > But we are experiencing deadlocks here. > > After investigation, my hypothesis is that on the ITL list of the data > block. > > I happened to know that TABLE_A is set up with INITRANS = 1 and PCTFREE = 0. > So there is no room for the the block hander to expend the ITL when needed, > so if it happened that any of the 2 processes try to update a data block > together (because those 2 SID happened to reside in the same block) there > will be a dead lock. > > Am my hypothesis RIGHT? > > If I am right, should I rebuild the table using say INITTRANS = 4, would > that be enough? And should I change the PCT_FREE setting? Hi, Bosco, Others have good input. I just want to add that in your parallel DML, your INITRANS has to be at least 4. I.e., if you set PCTFREE to a value such that ITLs could automatically grow to 4, leaving INITRANS at 1 may result in ORA-12829 (Deadlock - itls occupied by siblings at block %s of file %s). This is one of two cases you can't rely on a non-zero PCTFREE to grow ITLs (see Steve Adams' book p.49). Indexes need a little higher ITLs. Try setting INITRANS to at least 5 if your table INITRANS is 4. Unless your Oracle version is pre-8i, you can ALTER TABLE MOVE to rebuild the table with new physical attributes. MOVE doesn't mean you have to move; it really means rebuild. Existing blocks are also rebuilt. Remember to REBUILD the indexes after that. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: Checkpoint duration Date: 2003-08-07 22:28:54 PST Chuck wrote in message news:... > Statspack will tell you how many checkpoints you completed between two > snapshots. Apart, from logging checkpoints to the alert log, is there a way > to find out how long your checkpoints are taking? 
If your checkpoint often lasts long (> 5 seconds), you may look at the time_waited and average_wait columns of v$system_event where event = 'checkpoint completed'. If checkpoint always finishes quickly, I can't think of any alternative. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: UTL_SMTP hangs while sending Date: 2003-08-04 21:23:45 PST vslabs@onwe.co.za (Billy Verreynne) wrote in message news:<1a75df45.0308041101.4240b54b@posting.google.com>... > md@burning.de (Marc) wrote > > > We are using Oracle 8.1.7.4.2. > > we got the following problem: > > It may happen, that the vpn-connection between database server and > > mail-server wents down for a second, while a job is sending (using > > utl_smtp.write_text). If this happens, the job "hangs" - control is > > not given back. > > > Seeing that we do not have the UTL_SMTP source... Marc, this sounds > like a blocking call problem.. which is likely how UTL_SMTP is > implemented? > > And yes, Oracle will not raise an exception as there is nothing to > raise. The IP stack has the execution pointer and it is stuck. While > Oracle waits for blocking call to complete. > > I hate blocking calls in sockets programming. It is a pain as you > release control of a call to the IP stack. For example, you do a > send() and send the RCPT TO data. Next you do a recv() to get the ack > from the SMTP server. Only, something goes wrong and there is nothing > to recv().. ever.. until something gets killed or kicked. > > UTL_TCP itself can be used without having to run into such a blocking > call problem (e.g. you check if there's data in the recv() buffer > before doing a recv() and reading the exact number of bytes in the > buffer). > > Anyway IMO, if UTL_SMTP has been implemented using "default blocking" > calls, there is _nothing_ you can do to fix it... but to rewrite it. Billy, your explanation sounds quite reasonable. The only thing I want to say is that when you say IP stack, you really sound like saying TCP stack. To Marc, how about setting tcp_keepalive_interval (or tcp_keepalive_time depending on OS) to a lower value? Most OSes set it to 2 hours (I think it's some standard body recommended minimum). Tru64 has it at 4 hours; Linux maybe 3 hours. When your problem happens, can you not bounce the database and wait for your OS-specific tcp keep alive time to see if you get a timeout? Unfortunately, even if this can reduce the timeout, it's a system-wide setting. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: monitoring checkpoints Date: 2003-08-01 07:21:07 PST "Tanel Poder" wrote in message news:<3f2a39cf_1@news.estpak.ee>... > > One way would be to: > > select checkpoint_change#, checkpoint_time, checkpoint_count from > v$datafile_header; Another way is to set log_checkpoints_to_alert = true in init.ora. You'll see this info in alert.log. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: Log writter excessive memory usage. Date: 2003-07-26 21:03:50 PST "Tanel Poder" wrote in message news:<3f21bcee$1_1@news.estpak.ee>... > > First thing that comes into my mind is why have you configured only 4G of > swap with 32G of memory? 
> I'm not sure about AIX, but AFAIK standard Unixes like Solaris and HP-UX do > want to map whole memory to pagefile as well, so if you want real paging to > occur, you should actually have even more space allocated. > > I don't believe it's changed in recent versions... > > Btw. linux is different, it handles virtual memory differently & doesn't > want to match physical memory with swap space. (On Windows I don't know how > it works and do not care much either..) Solaris 2 (SunOS5) and up does not require physical swap space size matching or exceeding real memory. I doubt current AIX or HPUX still requires that. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: High Version count in V$SQLAREA Date: 2003-07-07 11:46:07 PST srivenu@hotmail.com (srivenu) wrote in message news:<1a68177.0307070450.410a4be1@posting.google.com>... > The following statements are coming from a client application. > Now my question is why are the version counts high for the above > statements even though they are using bind variables and no synonyms. Hi, Srivenu, James Morle's Scaling Oracle8i pp.278-9 gives the answer. In short, there're two cases where the same SQL has two versions (called two child cursors in Oracle documentation; I think James calls them two bodies of the same cursor head). One is when the bind threshold differs too much. Say :1 in the first version allows actual values of length 1 to 60 bytes, while in the second version :1 can accept values of length 61 and 200. The second case is when the optimizer plan changes. It can happen under various conditions. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: V$LOCK.TYPE = 'TO' --> What's this? Date: 2003-06-17 11:10:15 PST "Syltrem" wrote in message news:... > I have TO type locks in my db, and can't find what those are. > The object is a table and it's an exclusive row-lock > > The article "V$LOCK" Reference Note" on Metalink does not show a type TO Hi, Syltrem, TO locks are locks on global temporary tables. v$lock.id1 is dba_objects.object_id. For instance, in my Apps 11i database, I currently have more than 200 TO locks all placed on applsys.MO_GLOB_ORG_ACCESS_TMP. They correspond to the rows in v$sort_usage where segtype = 'DATA'. In my case, the lock mode is 3 (row exclusive). Because this lock name happens to be a common English word, it's hard to search either on Metalink or Google. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: monitor temporary locally managed tablespace? Date: 2003-06-11 21:44:39 PST ewong@mail.com (Ed Wong) wrote in message news:... > I used to have a script to join a few dba_ views to monitor the size > of all tablespaces. Since I switch my temp tablespace to temporary > locally managed tablespace using tempfile(instead of datafile) my > script does not show temp tablespace anymore. It looks to me > dba_free_space doesn't contain temporary lm tablespace. Look at v$temp_space_header. The columns are bytes_free and blocks_free. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: How to export from Oracle to text using Perl Date: 2003-06-11 11:21:20 PST tamarr@atomica.com (Tamar) wrote in message news:<7282f42a.0306110449.14a5cb06@posting.google.com>... 
> We are using Oracle 8.1.7 and were surprised to find that there is no > utility to export entire tables to text files. I searched around and > found that most people suggested using SQLPlus for this, but I find > that its relatively slow and really not efficient with space, since > you have to define the line size in advance. Can you give us your benchmark result? If SQLPlus has trimspool on, colsep set to a single character, arraysize set to 100 or so, pagesize 0..., I doubt it's slower than your Perl code. You can also avoid sqlplus spitting the result to the screen by setting termout off and making sure your SQL is in a script (rather than directly typed at SQL> prompt). So you type set term off trims on colsep '`' array 100 pages 0 spo myoutput @myscript.sql Redirection also works, even under DOS (The "select * from dual" is shown AFTER I "blindly" typed that and hit enter. I could suppress all these with sqlplus set commands): C:\>sqlplus yong/yong@tiny > myoutput select * from dual; exit C:\>type myoutput SQL*Plus: Release 9.0.1.0.1 - Production on Wed Jun 11 22:54:08 2003 (c) Copyright 2001 Oracle Corporation. All rights reserved. Connected to: Oracle9i Enterprise Edition Release 9.0.1.3.0 - Production With the Partitioning option JServer Release 9.0.1.3.0 - Production SQL> D - X SQL> Disconnected from Oracle9i Enterprise Edition Release 9.0.1.3.0 - Productio n With the Partitioning option JServer Release 9.0.1.3.0 - Production Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: How To know who is using the DBLINK Date: 2003-06-11 10:16:21 PST madhusudana.reddy@bestbuy.com (Madhu) wrote in message news:<9dd6101c.0306100824.52d0251a@posting.google.com>... > Hello All, > I have two DBLINKS created for users 'SYS' and 'SYSTEM' and i am in > the process of changing the passwords for both the users. Is there any > way to know who is using the DBLINKs created for the users 'SYS' and > 'SYSTEM' ?? If anybody is currently using the link, he may be holding a DX lock shown in v$lock. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: ORA-20002: 501 Bad address syntax Date: 2003-04-29 08:13:46 PST yong321@yahoo.com (Yong Huang) wrote in message news:... > schumacherfan2003@yahoo.com (leo) wrote in message news:<79d24144.0304232051.688b8818@posting.google.com>... > > > > I intend to send a simple smtp mail using the builtin UTL_SMTP package. 
> > > > But i get the following error:- > > ORA-20002: 501 Bad syntax error > > > > Here is my code:- > > CREATE OR REPLACE PROCEDURE mail > > IS > > BEGIN > > DECLARE > > v_connection UTL_SMTP.CONNECTION; > > > > BEGIN v_connection := UTL_SMTP.OPEN_CONNECTION(,25); > > dbms_output.put_line('Connection Opened'); > > > > UTL_SMTP.HELO(v_connection,); > > dbms_output.put_line('After calling helo'); > > > > UTL_SMTP.MAIL(v_connection,'test@myhost.com'); > > dbms_output.put_line('Sender set'); > > > > UTL_SMTP.RCPT(v_connection,'test1@myhost.com'); > > dbms_output.put_line('Recipient Set'); > > > > UTL_SMTP.DATA(v_connection,'Sent From PL/SQL'); > > dbms_output.put_line('Message body set'); > > > > UTL_SMTP.QUIT(v_connection); > > dbms_output.put_line('Connection Closed'); > > end; > > END; > > / > > > > Here is the output:- > > > > SQL> exec mail > > Connection Opened > > After calling helo > > BEGIN mail; END; > > > > * > > ERROR at line 1: > > ORA-20002: 501 Bad address syntax > > ORA-06512: at "SYS.UTL_SMTP", line 86 > > ORA-06512: at "SYS.UTL_SMTP", line 204 > > ORA-06512: at "ADMIN.MAIL", line 13 > > ORA-06512: at line 1 > > I've seen this on some SMTP servers. For instance, Yahoo mail server > is configured this way to disallow a simple MAIL FROM: > myaddress@mydomain.com: > > $ telnet mx1.mail.yahoo.com 25 > Trying 64.157.4.84... > Connected to mx1.mail.yahoo.com. > Escape character is '^]'. > 220 YSmtp mta576.mail.yahoo.com ESMTP service ready > helo yong321@yahoo.com > 250 mta576.mail.yahoo.com > mail from:yong321@yahoo.com > 501 Syntax error in parameters or arguments > > But other mail servers allow this (I changed my company to > ): > > $ telnet mx.nyc.untd.com 25 #netzero.com mail server > Trying 64.136.20.83... > Connected to mx.nyc.untd.com. > Escape character is '^]'. > 220 mx11.nyc.untd.com ESMTP > helo yong321@yahoo.com > 250 mx11.nyc.untd.com Hello .com pleased to meet you > mail from:yong321@yahoo.com > 250 yong321@yahoo.com... Sender OK > qwit > 221 mx11.nyc.untd.com Closing connection > Connection closed by foreign host. I think I know what happened to my test on Yahoo mail server. It needs angle brackets in the "mail from" command: $ telnet mx1.mail.yahoo.com 25 Trying 64.157.4.82... Connected to mx1.mail.yahoo.com. Escape character is '^]'. 220 YSmtp mta424.mail.yahoo.com ESMTP service ready helo yong321@yahoo.com 250 mta424.mail.yahoo.com mail from: 250 sender ok quit 221 mta424.mail.yahoo.com Connection closed by foreign host. Using angle brackets also works on netzero's mail server. So always use it. In fact, several Google messages say that *is* the correct form according to the RFC. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: Help: Is this sql from Package or not? Date: 2003-04-28 15:25:11 PST Karsten Farrell wrote in message news:... > "charlie cs" said... > > Guys, > > > > Is there any way to detect if a java application is hitting the db using a > > stored proc vs. a > > straight sql statement? We'd like to identify what sql we are getting that > > is not in a stored proc. so we can change the Java code. > > > > Any suggestions will be highly appreciated > > > You could add functionality to the stored procedure to write an audit > record to another table whenever it's called. You could write a trigger > against the table(s) that writes an audit record, including the user who > fired the trigger. 
That may be the most reliable way, short of looking at v$sql where address is the currently running Java session's sql_address. Other than that, you may simply select sql_text from v$sql and look at the text. Code from stored procedures (packages, functions) looks a little different from straight SQL. For instance, this is my procedure: SQL> create or replace procedure p as begin 2 delete from qwerty; 3 end; 4 / But after I execute it, v$sql shows "DELETE from qwerty". Straight SQLs will show as original text strings in v$sql. In addition to changing letter case, the PL/SQL engine might also change literals to bind variables in the where clause, add one more space between some words, and so on. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: how much memory does a dedicated connection take up Date: 2003-03-26 10:55:58 PST andreyNSPAM@bookexchange.net (NetComrade) wrote in message news:<3e7b7f29.2175381693@nyc.news.speakeasy.net>... > Has anybody ever calculated this properly on solaris 2.7 (64bit) > ps -ef|grep LOC > oracle 16182 1 0 14:50:35 ? 0:11 oracleBSBL (LOCAL=NO) > > An article here suggests: > http://www.akadia.com/services/solaris_tips.html > (total minus shared), but I am having a hard time to believe that my > connections take up 30 Megs. (sort_area_size is 32K) > Trying to move away from MTS (just tired with it), but don't know if I > can allow thousands of dedicated connections on the machine > (application doesn't support connection pooling) > Or should I just look at pga in v$sessstat? (for this session) > > session pga memory > 748140 > session pga memory max > 748140 > > Looks like it's kind of close to heap (as suggested by Tom Kyte) > http://asktom.oracle.com/pls/ask/f?p=4950:8:166696416062103318::NO::F4950_P8_DISPLAYID,F4950_P8_CRITERIA:267215452482, > > /usr/proc/bin/pmap 16182 > 16182: oracleBSBL (LOCAL=NO) > 00010000 26496K read/exec > /u31/app/oracle/product/817/bin/oracle > 019FE000 264K read/write/exec > /u31/app/oracle/product/817/bin/oracle > 01A40000 976K read/write/exec [ heap ] > 80000000 742744K read/write/exec/shared [ shmid=0xb59 ] > FEA82000 8K read/write/exec [ anon ] [snipped] > FEB9E000 8K read/write/exec [ anon ] > FEC00000 4056K read/exec > /u31/app/oracle/product/817/lib/libjox8.so > FF004000 160K read/write/exec > /u31/app/oracle/product/817/lib/libjox8.so > FF02C000 8K read/write/exec [ anon ] > FF090000 16K read/exec > /usr/platform/sun4u/lib/libc_psr.so.1 > FF0A0000 8K read/write/exec [ anon ] > FF0B0000 16K read/exec /usr/lib/libmp.so.2 > FF0C2000 8K read/write/exec /usr/lib/libmp.so.2 [snipped] > FF3DE000 16K read/write/exec /usr/lib/ld.so.1 > FFBE0000 64K read/write/exec [ stack ] > total 776776K > ....... > We use Oracle 8.1.7.4 on Solaris 2.7 boxes The 30M from total 776776K minus the shared memory segment 742744K is not right. All memory segments whose permissions do not show "write" should also be subtracted. So the biggest missing chunk is obviously the oracle binary itself. Even common sense says the image of this binary should be shared by all oracle processes. But "session pga memory" in v$sesstat is still larger than even just the heap segment in pmap -x. PGA must be a very restricted concept, not equivalent to the private memory used by an OS process. In pmap output, stack is absolutely private. Shared libraries have private portions. Even the oracle executable has a little bit (presumably the data and bss segments in it).
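As a rough illustration (a sketch only, not a precise measurement: 16182 is the pid from the pmap listing above, and the total is an upper bound on private address space rather than resident memory, since writable pages may never all be touched), you could let awk total the writable, non-shared mappings from the same pmap output:

/usr/proc/bin/pmap 16182 | awk '/write/ && !/shared/ {sub("K","",$2); sum+=$2} END {print sum "K writable private mappings"}'

That picks up the stack, the heap, the anon segments, and the writable pieces of the shared libraries and the oracle binary, while skipping the SGA (the shared segment) and the read-only text.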
All these contribute to the memory usage of an oracle process. BTW, pmap is wrong in reporting some part of memory as private when it's shared (most notably the shared memory segment!). This bug still exists in Solaris 8 pmap. You need the correct version from www.solarisinternals.com. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: what's the meaning of "heuristic" Date: 2003-03-21 11:26:36 PST wrote in message news:... > the 'heuristic' is often oracle tunnelling. > what s meaning Not sure what you mean by tunneling. "Heuristic" as computer jargon usually refers to those things computers alone don't do a good job at. A few days ago somebody here asked a question about character set recognition. That's partly a human task. An Oracle-related case: how do you unambiguously identify a specific user's session in v$session? You may ask whether the session is launched from his desktop and he only launched one session, so you can identify it by the osuser column. If he says it's from Citrix MetaFrame using a generic account such as APPS, you can consider logon_time combined with v$sql where address = <that session's sql_address>, and so on. This is heuristic. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: Dictionary Cache Size Formula ??? Date: 2003-03-21 10:25:53 PST ibrahim.dogan@lowes.com (Ibrahim DOGAN) wrote in message news:<33634314.0303200720.8cd967e@posting.google.com>... > We have very low dict. cache hit ratio, around 80%. > > Dict cache size is 6M and shared pool is 237MB. > > I have no clue about the internal formula Oracle applies to 'shared > pool' to set dict cache size but even if we double the shared pool, > dict cache size doesn't budge much (7-8M) and we still get very low > dict cache hit ratio? > > Any workaround to directly increase the size of dict cache? Ibrahim, there's no way to directly size the dictionary cache. If your application has to access a lot of objects, you have to increase shared_pool_size to increase the dictionary cache (so not wasting time blindly searching in the library cache becomes more important, i.e. avoid literals in SQLs, avoid public synonyms, etc.). Generally, low recursive calls correspond to low dictionary cache misses. But I don't think there's a way to avoid those recursive calls given certain applications. You may also consider using locally managed tablespaces. That might actually lower your dictionary cache hit ratio, but it also lowers dictionary cache hits. So the ratio becomes less meaningful and you gain by giving more memory to other parts of the shared pool, mostly the library cache. Some people believe increasing _row_cache_cursors reduces dictionary cache misses. Steve Adams's notes clearly tell us that's not so. Yong Huang ******************************************************************************** From: yong321@yahoo.com (Yong Huang) Subject: Re: Urgent: how I can see which character set the file is used Kalle wrote in message news:<3E66DBA4.E8A7F35F@jippii.fi>... > Hi all, > > I have an urgent problem, how could I see which character set is used in > certain flat file on UNIX level. > > I need to know this in order to setup the value correctly into the > control file... > > Any ideas would be apreciated > > Rgds, > Kalle Hi, Kalle, One tool is David Necas (Yeti)'s Enca (http://trific.ath.cx/software/enca/).
But he told me it only works well in detecting "encodings for Belarussian, Czech, Polish, Russian, Slovak and Ukrainian, and the few multibyte encodings it detects are Unicode variants like UTF-8". It has Linux and BSD binaries but I built it on Solaris without problems. He also points me to GNU Recode (http://www.gnu.org/directory/recode.html) and Saka's Chinese encoding AutoConvert (http://banyan.dlut.edu.cn/~ygh). I haven't tried either of these. Since this work is a fundamental weak point of computers, you may consider posting part of the file to a language-specific newsgroup (or even some under soc.culture hierarchy). I have a table that helps people identify languages: http://yong321.freeshell.org/misc/language.html. But that doesn't directly tell you what character set you should use. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: Restrict TOAD from accessing a DB? Date: 2003-03-05 15:20:38 PST navaed7024@hotmail.com (E. Navarro) wrote in message news:... > Oracle EE 8.1.7.4 > Sun OS 5.7 > > Does anyone know of a solution to prevent users from logging into a > database with certain applications. In my case, I want to prevent users > from logging into our production databases using TOAD. > > I can monitor who is logging in and how they are logging in using OEM, > but what I would like is to be alerted the second someone tries to log > in using TOAD. Can this be done? > > TIA, > Daniel N. > Oracle DBA You can check for v$session for module = 'T.O.A.D', or if the user hasn't renamed the toad program name, program = 'TOAD.exe', and kill the session. But it's trivial to even change the binary toad.exe to hide the 'T.O.A.D' string. Open it with a binary file editor, or even a text editor that is binary-file friendly such as (g)vim. Find the string "begin sys.dbms_application_info.set_module('T.O.A.D.', null); end;" and replace 'T.O.A.D' with any 7 character long string. In a nutshell, there's no absolute way to prevent it. You may be better off blocking connections from any terminal other than designated ones using $TNS_ADMIN/protocol.ora (or .protocol.ora if it's before 8.1.6.2). If you wish to use product_user_profile table to restrict connections from third party tools such as Toad, it's not possible. That table can only block sqlplus connection. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: General question on index compression and performance "Jan Gelbrich" wrote in message news:... > > I would like to know if index compression can have *negative* performance > impacts to applications. Generally, if you have fast CPUs and slow disks, compress indexes if you can. If you have slow CPUs and fast disks, don't do it. I think the reason is obvious. How many CPUs do you have and how fast are they? What kind of disks (rpm?) and what's the RAID level? Veritas or UFS... Yong Huang [See Jonathan Lewis's FAQ at www.jlcomp.demon.co.uk/faq/compress_ind.html] ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: too much references to old archive log in v$history_log Date: 2003-02-25 11:27:22 PST drak0nian@yahoo.com (Paul Drake) wrote in message news:<1ac7c7b3.0302241602.b3eb877@posting.google.com>... > The number of files kept in the log history is most likely kept in the > controlfile. 
If you execute the following statement: > (assuming that you have the privileges to do so) > > SQL> alter database backup controlfile to trace; > > And examine trace file created in \udump, > you will see the current maxloghistory parameter specified. > I believe that you need to re-create the controlfile to alter this > parameter, which means that your existing loc history will be cleared. Actually, it's resetlogs that does the cleanup. Recreating the controlfiles can do this. But you can also use other methods to reset logs, such as a fake incomplete recovery (recover database until cancel). Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: What is a cursor EXACTLY?(internals question) Date: 2003-01-30 15:43:43 PST "Ryan" wrote in message news:... > "Norman Dunbar" wrote in message > > Cursor : CURrent Set Of Records. (at least it was in IDMS(X) > > hierarchical databases that I used to work with !) ... > thanks, but any idea what type of data structure they use to store the > cursor? Is it on the heap, on the stack? Array, linked list? Hi, Ryan, Norman explained what a cursor is in PL/SQL programming. But generally, a cursor is just a SQL statement, possibly modified by the SQL engine from what you originally submitted. A cursor can be a library cache object, which has associated attributes like hash value, address in memory, versions, optimizer goal, parse tree... viewable from v$sql or v$sqlarea, and attributes in the session's private memory such as bind information. My understanding is that a single cursor is just a string, not a structure, unless you lump its attributes into this concept. Some books talk about cursor heads and bodies (J. Morle, Scaling Oracle8i, p. 277). I think they're just different names for parent cursors and child cursors, respectively, as in the documentation. The management of cursors in the library cache is by way of a number of data structures, such as linked lists. A cursor can also exist in a session's private memory, in cases where you have session_cached_cursors set to a positive number and you've used the cursor more than 3 times. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: DBMS_JOB.remove Date: 2003-02-11 15:55:29 PST sandijs wrote in message news:<2512749.1044967745@dbforums.com>... > Oracle thinks you are not supposed to and thus you can't do it with > dbms_job package. But there is an undocumented (and not supported) > package dbms_ijob which allows you as dba modify/remove any job. [message in response to how to remove a job owned by another user] Equally "not supposed to" is create procedure thatuser.tmp as begin execute immediate 'begin dbms_job.remove(thatjob#); end;'; end;. Yong Huang Norman Dunbar wrote in message news:... > Morning Yong, > > that will only work if the user has CREATE_ANY_PROCEDURE and > EXECUTE_ANY_PROCEDURE granted. I know this because it is a major > security hole in a package we have here - for which I've raised a > serious bug and so far nothing has been fixed. > > Cheers, > Norman. > > ------------------------------------- > Norman Dunbar > Database/Unix administrator > Lynx Financial Systems Ltd. From: Yong Huang (yong321@yahoo.com) Subject: Re: DBMS_JOB.remove Date: 2003-02-12 07:00:56 PST Yeah, I know it needs those two privileges, so SYS, SYSTEM... can remove other people's jobs.
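Spelled out, the trick from my earlier post looks roughly like this (a sketch only; the job number 123 and the procedure name are made up, and you need CREATE ANY PROCEDURE and EXECUTE ANY PROCEDURE as Norman says):

create or replace procedure thatuser.tmp as
begin
  -- definer's rights: dbms_job treats thatuser, the job owner, as the caller
  execute immediate 'begin dbms_job.remove(123); end;';
end;
/
exec thatuser.tmp
drop procedure thatuser.tmp;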
(In fact, with the help of another undocumented package, even EXECUTE ANY PROCEDURE alone may allow anybody to remove others' jobs, as one of the well-known experts here explained). That security hole may not be counted as a bug because of the fundamental design. Oracle does things differently from the OS in the sense that an Oracle user executes somebody else's code assuming the owner's identity by default, while on any OS a user executes somebody else's code with his own privilege by default. People designing UNIX are always more security-conscious than Oracle. Setuid scripts or even binary programs are often frowned upon by security-sensitive people. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: Copy Database from NT to W2K Date: 2002-12-26 15:39:45 PST giuseppe.didodo@bluewin.ch (Giuseppe) wrote in message news:... > Hi guys > > sorry for my delay to answer. > > We got it up, it finally works now ! We left the DB on the machine > with NT4 and tried to upgrade the machine to W2K. We had some troubles > with drivers for the RAID controller, but when we solved that, we > could upgrade the machine. Because the DB was already on 8.1.7.4, > everything was fine. > > I still don`t understand why it didn`t work, I did that a couple of > times and it always worked. Ok, 90% I did it running under UNIX, NT > just two or three times. > > Thanks again for your help! > merry christmas and happy new year and lot less DB problems... Hi, Giuseppe, There shouldn't be any problem directly moving datafiles between Windows NT and 2000. But I have yet to find official endorsement from either Oracle or Microsoft saying this is allowed. (See the last paragraph at http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&oe=UTF-8&selm=b3cb12d6.0206091644.76926b5f%40posting.google.com and the entire thread, which is mainly about the potential use of datafiles across OSes) Ever since my test on my NT and W2K dual boot computer using the same datafiles, I have paid close attention to technical papers and books on NTFS differences between NT and W2K. I still haven't found a definitive answer from an expert saying Oracle datafiles are safe to be directly moved from one to the other. So please post back here if you find any problem later on. The ultimate question is, should NT and W2K be considered as two OSes or just one OS with two versions? And generally, how different should two versions of one OS be to require exp/imp instead of directly copying Oracle datafiles? Or is it the filesystem, not the OS, that matters? Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: rotating listener.log Date: 2002-12-16 09:54:57 PST "Peter van Rijn" wrote in message news:... > The alert.log is NOT being kept open. For every singles write to the file it > is opened before, and closed afterwards. If oracle doesn't find the > alert.log when trying to open it, it is created instead. Hi, Peter, alert_SID.log is kept open all the time by background processes (see below). No server process (shadow process) writes to this file. /app/oracle/admin/pdev/bdump>fuser alert_pdev.log alert_pdev.log: 5937o 5935o 5933o 5931o 5929o 5927o 5925o 5923o 5921o 5919o /app/oracle/admin/pdev/bdump>ps -fp 5937,5935,5933,5931,5929,5927,5925,5923,5921,5919 UID PID PPID C STIME TTY TIME CMD oracle 5933 1 0 Sep 29 ? 320:22 ora_snp1_pdev oracle 5923 1 0 Sep 29 ?
31:26 ora_lgwr_pdev oracle 5937 1 0 Sep 29 ? 517:32 ora_snp3_pdev oracle 5927 1 0 Sep 29 ? 2:12 ora_smon_pdev oracle 5931 1 0 Sep 29 ? 383:58 ora_snp0_pdev oracle 5929 1 0 Sep 29 ? 0:01 ora_reco_pdev oracle 5935 1 0 Sep 29 ? 440:20 ora_snp2_pdev oracle 5925 1 0 Sep 29 ? 11:06 ora_ckpt_pdev oracle 5921 1 0 Sep 29 ? 5:20 ora_dbw0_pdev oracle 5919 1 0 Sep 29 ? 0:01 ora_pmon_pdev Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: log file sync vs. log file parallel write ? Date: 2002-12-12 11:54:13 PST bchorng@yahoo.com (Bass Chorng) wrote in message news:... > I really can't tell the difference from the above statements [description of the two wait events in documentation--Yong's note]. > But look at my v$session_event of a session, these 2 are way > different: > > EVENT TOTAL_WAITS TIME_WAITED > ----------------------------------------------------------------------- > log file parallel write 1576423 20189 > log file sync 1779889 337294 > > So is log file parallel write part of log file sync ? If so, > what's the difference called ? How can it be so big ( from > the above statements, the difference would be the time > writing to redo log buffer which should be very fast ) ? It's hard to compare these two events because they're experienced by different sessions: log file sync by user sessions, log file parallel write (as well as log file single write) by background sessions specifically LGWR. Total waits and time waited could be more or less when you compare them (not always one is higher than the other). Remember that when user sessions are not doing anything, LGWR could still be busy writing, because log writing is not always triggered by session commit. Also, don't forget the wait by LGWR writing logfile headers, which is sequential. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: Re : UTL_FILE Newsgroups: comp.databases.oracle.misc, comp.databases.oracle.server Date: 2002-11-30 13:30:40 PST damorgan wrote in message news:<3DE7B40E.BF2D2E82@exesolutions.com>... > No. You need to specify each directory individually. > > utl_file_dir = C:\dev\client > utl_file_dir = C:\dev\client\sub1 > utl_file_dir = C:\dev\client\sub2 > > Just stack up the utl_file_dir statements one after the other. That's what the documentation says. But if you put them on one line like this: utl_file_dir = c:\temp,d:\temp, it still works. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: alter table move... question (logging/nologging) Date: 2002-11-30 13:20:34 PST Jeremiah Wilton wrote in message news:... > It is not dangerous to move the table NOLOGGING, as long as you: > > - Rebuild all the indexes on the table I'd like to add that if the table has LOB columns, the LOB segments have to be moved as well: alter table mytable move tablespace newts lob (mylobcol) store as lobsegmentnamefromxxx_lobs (tablespace newts). But the lobindex doesn't need to be rebuilt since it's still valid. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: Abstract DataType ?? Date: 2002-11-22 11:56:51 PST fhashmat@yahoo.com (Fahim) wrote in message news:<62f7c5ff.0211211233.5485ef18@posting.google.com>... 
> Hi, > I am in the middle of designing a report server (demoralized form of > production), and want a suggestion, I have a table which could grow > upto 2.5 to 3 million rows, Is it a good idea to make several abstract > data type, like Name_TY(First, Last, Mi) Bill_Add_TY(Add1,Add2), > Ship_Add_TY(Add1,Add2) etc in one table or create one table with all > of these columns prefix with main demog like NAME_FIRST, NAME_LAST, > MAIL_ADD_ADD1, SHIP_ADD_ADD1 etc. > > Which would be most efficient in performance, where would data > actually store in abstract type column, is it store out side the table > or in the table. if it store outside the table (like nested table) > then I think second option would be best for me. In terms of performance, I'm under the impression that any fancy, object-oriented data type causes a drop instead of boost in performance. It may make developer's code look prettier, if not cleaner. In this newsgroup, Jonathan Lewis may be the best person to answer the storage question. Or you can read his book. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: Log file sequential read ?? Date: 2002-11-22 10:04:22 PST srivenu@hotmail.com (srivenu) wrote in message news:<1a68177.0211220054.699c89d0@posting.google.com>... > I dont remember where and by whom but i read an article long time back > that the LGWR --often-- needs to read an entire log block from disk > before it tries to write a partly modified log buffer block to the log > file. > Lets say the OS block size (Which is also the log file block size) is > 512 bytes. > Then if LGWR tries to write redo shorter than that, it reads the > entire block and writes it back with the modified data. > I too am not entirely clear why. The article was written either by > Jonathan Lewis or Steve Adams or Tom Kyte. > regards > Srivenu When the filesystem needs to write to a disk block a chunk of data smaller than the filesystem block size (not the hard disk sector size which is 512 bytes), it has to read the block from disk and merge (append in case of lgwr) with the new data and write the entire block back to disk. Note that the OS block size is kind of an ambiguous term. I think it should only be used to refer to the filesystem block size, which is 8k for UFS on sun4u architecture and 4k on most other UNIX filesystems. This is one reason why redo logs are better placed on raw disks, so the low-level disk driver bypasses filesystem and writes in 512 byte chunks, the same as log block size in most cases (`dbfsize myredo.log`). Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: Log file sequential read ?? Date: 2002-11-22 10:22:43 PST spamdump@nospam.noway.nohow (Ed Stevens) wrote in message news:<3dda63b9.57518006@ausnews.austin.ibm.com>... > Platform: Oracle 8.1.7.4 EE on Win2K > > A statspack report shows most of our wait event time (39.5 %) for 'log file > sequential read'. A search of MetaLink turned up lots of hits for Rdb. The few > other hits didn't really address this event, just had it mentioned in a list of > events. Pretty much the same for a search of this ng's archives. OraPerf > simply says 'The logfile header block was read. This could happen for example > when a new log file is added or when a logfile is dumped.' > > So, with this event fingered as my 'heavy hitter', what course of action would > you recommend? 
I've never seen significant wait time on this event. In addition to adding or dumping logfiles, this event obviously occurs when you switch logs (v$system_event.total_waits should increment by the number of log members when you switch logs). If you don't use RAID or any disk management hardware/software, make sure your redo log files in one group are on separate disks. Make sure you don't switch logs too frequently. Create no more than 3 members per group. If you do have hardware mirrors, I suggest you only use one member per group. Some people may not like this but I've never had problems since I started to do this on our highly active Oracle Portal database almost 2 years ago. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: Oracle packages Date: 2002-11-21 11:50:58 PST odeborne@hotmail.com (Olivier Deborne) wrote in message news:... > SELECT level, > PART_NO, > (SELECT DESCRIPTION FROM INVENTORY_PART WHERE PART_NO = > RECIPE_STRUCTURE.PART_NO) PART_DESCRIPTION, > LINE_SEQUENCE, ... > It compiles just fine on my server (ora 90) but on another server > which runs Oracle 8.0.1.7 I get the following error: > > PLS-00103: Encountered the symbol "SELECT" when expecting one of the > following: > > ( - + mod not null others > avg > count c I don't have 8.0.1.7 around. But 8.0.5 doesn't seem to support the expression in the SELECT clause. Try this on your 9i and 8.0 database to find out: select username, (select user_id from user_users) userid from user_users; Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: TEMP tablespace problem Date: 2002-11-07 12:19:27 PST ni4ni@hotmail.com (Guy) wrote in message news:... > Thank you all, I found what the problem was. > > Apperently there is a bug in Oracle 8.17: when you define a temporary > tablespace and set it to no-logging mode it doesnt clean ! > > Ive checked it many times by running a large join in all possible > situations and that was the conclusion. Now I set it to log mode and > it doesnt grow anymore. On my 8.1.7.2 (recently changed to 8.1.7.3), I changed my datafile-based temporary tablespace to nologging a long time ago. Never had your problem. But maybe we never run any "large joins in all possible situations". Can you find any reference on Metalink about the bug? Personally I don't think there's any effect changing between nologging and logging for a datafile-based temporary tablespace. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: Schema Size Date: 2002-11-07 11:52:47 PST ermin@dzinic.com (Ermin) wrote in message news:... > Hi, > How can I find out the size of the current schema? Usually this question is asked about the disk space currently used by a specific user. A simple answer is adding up all BYTES from DBA_SEGMENTS where OWNER = 'THATUSER'. But more accurately, you may also add various SIZEs in DBA_OBJECT_SIZE for that user. This may be insignificant but it's good to know it exists. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: "X commits per minute": is there an absolute threshold ? Date: 2002-11-06 15:37:41 PST yong321@yahoo.com (Yong Huang) wrote in message news:... > groups commits together". 
It's true LGWR waits 10 centiseconds after > it's posted by the first commit, to see if there's another commit > request coming in. If yes, they're combined into one. But I don't Jonathan Lewis emailed me saying it doesn't sound likely that's the case. The reason is that if that were true, each session would have a lot of log file sync wait time. I think Jonathan is right. I read that from Metalink Note:194107.999. Some Oracle kernel developer is saying that. But I think a simple test like this SQL> set serverout on SQL> declare 2 t1 number; 3 t2 number; 4 begin 5 select hsecs into t1 from v$timer; 6 dbms_output.put_line(t1); 7 execute immediate 'grant select on q to public'; 8 select hsecs into t2 from v$timer; 9 dbms_output.put_line(t2); 10 end; 11 / 583316080 583316082 PL/SQL procedure successfully completed. SQL> / 583316408 583316409 PL/SQL procedure successfully completed. SQL> / 583316519 583316521 PL/SQL procedure successfully completed. can prove that wrong. (Here I have a table Q. GRANT is a DDL which commits) Also, tracing lgwr process (truss -t pwrite -p [pid] on Solaris) while I press / in sqlplus seems to show lgwr writing happens immediately; 10 centiseconds would be a noticeable delay, I think. As Jonathan pointed out, a group commit occurs when LGWR is writing while other sessions commit. Those commits will be combined into one log write. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: How to figure out how many IOs a SQL costs ? Date: 2002-10-17 14:55:06 PST bchorng@yahoo.com (Bass Chorng) wrote in message news:... > I don't see that Oracle has # of IOs as a statistic, but this seems to > be a common question. If I am right, disk in tkprof or disk_reads in > v$sql all refer to # of blocks. I have yet to find a direct way to > tell # of IOs incurred. > > The closet I can think of is, suppose you can control the environment, > the difference of SUM(PHYRDS+PHYWRTS) in v$filestat before and after a > SQL run, plus the value of "redo writes" in v$sesstat. Why not trace the server process? If on Solaris, do this: $ truss -c -t read,write,pread,pwrite -p 19272 ^Csyscall seconds calls errors read .00 4 write .00 3 pread64 .00 4 ---- --- --- sys totals: .00 11 0 usr time: .00 elapsed: 7.79 The calls column is what you want. Here the pid 19272 comes from select spid from v$process where addr = (select paddr from v$session where sid = [thesessionid]). ^C is pressed when you're done with your SQL. For other OSes, find the appropriate system call trace command. Usually strace for BSD UNIX. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: System state dump Date: 2002-10-08 08:24:34 PST Other than looking at x$kgllk, you can also simply find the object (x$kglob.kglhdadr = v$session_wait.p1raw where event is library cache lock or pin). Find the SQLs in v$sql where users_executing > 0 and lower(sql_text) like '%thatobject%'. Find the sessions in v$session where sql_address = v$sql.address for the rows just found. The last step can also be done with v$open_cursor. But you have to use your judgement all along. E.g, not necessarily all rows selected from v$sql are relevant. This doesn't work for row cache lock wait. Incidentally, using locally managed tablespace is said to reduce row cache locks. Yong Huang mark@bobak.net (Mark J. Bobak) wrote in message news:... 
> Hi Chuck, > > Oracle has actually published this info. See MetaLink Doc. ID 122793.1. > It outlines two techniques: One utilizing systemstate dump analysis and > one using X$KGLLK. > > Hope that helps, > > -Mark > > "Chuck" wrote in message news:... > > -----BEGIN PGP SIGNED MESSAGE----- > > Hash: SHA1 > > > > Is there any information available on how to interpret the results of > > a system state dump? Specifically I want to know who is holding and > > waiting on library cache locks, library cache pins, and row cache > > locks. I have had an ongoing problem with materialized view refreshes > > hanging, and when I query v$session_wait I find dozens of sessions > > waiting on these events for long periods of time. OTS has been able ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: Library Cache Pin problems Date: 2002-09-20 12:29:48 PST eckard.biskup@gmx.de (ecki) wrote in message news:<60600e9b.0209190413.6c1daca0@posting.google.com>... > Hi all, > > I got some problems with Oracle and Library Cache pin's. > > Situation: > ========== > Oracle version: 8.1.7.4 > OS: Solaris 8 > Hardware: Sun Fire 15000 > Total size of database: 2,5 TB > Involved Application: Specific ERP-Application with Oracle Backend > > Sometimes the database got in trouble with library cache pin's (93% > within Stats Pack). All transactions which were running at this time > ran very slow. The reason for that situation is likely unclear. But my > question is, even if the ERP-application generates the problem, what > can be done to prevent this situation in general? In addition to others' suggestion, you may also want to periodically look at v$session_wait and catch some 'library cache pin' wait events. When you see one, match P1 (handle address) with x$kglob.KGLHDADR to see what object it is. If the object is constantly a particular one, see if you can optimize it or report it to Oracle Support and ask why. Also, make sure the object is not being compiled when it's used. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: number of extents question Date: 2002-09-12 09:30:16 PST "Niall Litchfield" wrote in message news:<3d7f5b6a$0$1294$cc9e4d1f@news.dial.pipex.com>... > when appropriate. LOB's on the other hand are a big advance over longs. If > you don't use longs then fair play. if you do use longs then LOB's perform > better are easier to code for and can be accessed easily through sql and > PL/SQL. Longs and especially LONG RAW have had their day. Hi, Niall, Metalink notes 74658.996 and 165338.995 seem to say LONG actually is faster than CLOB and LONG RAW faster than BLOB. Benefits of LOB over LONG (RAW) are still obvious though. And LOB performance issues may have improved over the past 1 or 2 years. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: i/o waits by session? Date: 2002-09-11 15:40:36 PST "Chuck" wrote in message news:... > -----BEGIN PGP SIGNED MESSAGE----- > Hash: SHA1 > > I know by querying V$SESSION_EVENT that a particular SID has > experienced very high i/o waits (up to 2 minutes per i/o!). Is there > a way to break down the i/o waits for that SID by file or filesystem? > V$SESSION_WAIT will only show me the current waits at a given point > in time. 
I need to see the i/o by file or filesystem for the entire > session, or at least over a given period of time for that one > session. Hi, Chuck, You can periodically sample v$session_wait where event like '%read%' or event like '%write%' and record all file# and block# somewhere. I can't think of an easier way for now. If you don't want to lose any I/O statistics and so sampling is not acceptable, and performance impact due to tracing is not a big concern, then you can trace the shadow process on OS. What OS is it? If Solaris, try truss -t read,write,pread,pwrite,readv,writev -p [shadow pid] > yourlogfile 2>&1 In another window, find out all files corresponding to the file descriptors for this process lsof -p [pid] -a -d [fd1,fd2,fd3...] where fds are the first argument of those read or write calls. I'll think about this problem a little more. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: reorganize tablespace Date: 2002-09-11 14:54:44 PST dbazhong@yahoo.com (xing zhang) wrote in message news:<1c47148a.0209080829.7ce5fe47@posting.google.com>... > I need to reorganize 11 tablespaces. What are the procedures to > reorganize entire tablespace and any scripts are available? Quest has a tool called Space Manager. My boss likes it a lot. For pre-8i databases, it's a very useful tool. With 8i or newer, I usually ALTER TABLE table MOVE TABLESPACE itstablespace and do the same to indexes (change MOVE to REBUILD). Use SQL to generate SQLs to do that. By doing this, I don't need to acquire the skill of using Space Manager! Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: Why are people so afraid of underscore parameters ? Date: 2002-08-25 16:21:11 PST Thomas Kyte wrote in message news:... > (in fact, I can show you a truly big problem with _trace_files_public, security > and another undocumented but seemingly innocent event that can be set at the > session level -- just need _trace_files_public to be set and ALTER SESSION privs > and I can get some pretty neat information) Hi, Tom, Out of curiosity, what event is that, suppose the user has alter session privilege? By the way, I don't always grant alter session to all users. But I always grant select_catalog_role to whoever asks. Without alter session but with _trace_files_public set to true, all trace files are world-readable. Is that a problem? I would say, ideally, developers can use a "read only" account on the production box to make their development easier. That account only has create session privilege and select_catalog_role, plus some select on XXX table privileges. With _trace_files_public being true, they can also see what errors the database generates (as well as trace files DBAs manually create). I'm willing to open rather than close the database as much as possible, just as UNIX opens /var/adm/messages and most files under /etc world-readable. If security is really a concern, don't even allow SQL connection to the database, just as you don't allow UNIX shell access to a production UNIX box. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: Why are people so afraid of underscore parameters ? Date: 2002-08-26 07:30:25 PST Thomas Kyte wrote in message news:... > In article , yong321@yahoo.com > says... > > > >Thomas Kyte wrote in message > >news:... 
> >>(in fact, I can show you a truly big problem with _trace_files_public, security > >> and another undocumented but seemingly innocent event that can be set at the > >>session level -- just need _trace_files_public to be set and ALTER SESSION privs > >> and I can get some pretty neat information) > > > >Hi, Tom, > > > >Out of curiosity, what event is that, suppose the user has alter > >session privilege? > > curiosity kills cats. I like cats. Killing a cat this way may not be that easy. I find that in Oracle 7, but not beyond, you can use alter session to set the blockdump event: alter session set events = 'immediate trace name blockdump level [level]' where [level] is the return value of the function dbms_utility.make_data_block_address ("documented" in Rama Velpuri's book). So a person who knows how to interpret block dumps knows the values in the table even though he can't select from the table inside the database. But the difficulty with this security breach is that there's no select_catalog_role in Oracle7, and the file number and block number needed for dbms_utility.make_data_block_address is only available in dba_extents, not user_extents. So the user has to be granted select on dba_extents by SYS. BTW, if the user can read block dumps, he's close to being able to read the datafile directly anyway. So the datafile permission has to be such that others (world) can't read to stop these "hackers". Not all databases have their datafile permission set that way. I agree, there're other, much easier exploits. Granting CONNECT role instead of CREATE SESSION is a bad one. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: Why are people so afraid of underscore parameters ? Date: 2002-08-30 14:35:35 PST kazimiej@bms.com (Joe Kazimierczyk) wrote in message news:... > Even worse: with alter session and readable trace files, > in Oracle 7, 8i, and up, it's possible to find user's passwords > in plain text. This clever trick can be found in the white paper > "exploiting and protecting oracle" at www.pentest-limited.com > > All the more reason to be cautious when using undocumented features. Hi, Joe, I think you're referring to http://www.pentest-limited.com/utl_file.htm. Now I know what event Tom Kyte was talking about. I tested it. Even in a library cache dump at level 3, "alter user..identified by.." is shown, although v$sql only shows the first 20 characters such as "alter user yong iden". So if a user can alter session and see trace, the only workaround is for the DBA to flush the shared pool right after changing a user's password. I wish some events had to be set with a command other than alter session. Allowing alter session to dump library cache is like allowing setenv or stty commands to run crash(1M) in Solaris (crash is used to "examine system image"). Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: dba_objects timestamp vs last_ddl_time Date: 2002-08-28 11:57:28 PST nicolaou@netinfo.com.cy (Christos) wrote in message news:<1f2d0d84.0208280610.57aeb598@posting.google.com>... > I am using the table 'dba_objects' to monitor changes on tables and > even though I checked the Oracle Documentation I could not understand > the difference of columns 'timestamp' and 'last_ddl_time' in the above > table.
Hi, Christos, If you try a DDL that does *not* change the table specification, such as GRANT SELECT ON yourtable TO SYSTEM, LAST_DDL_TIME will be current and TIMESTAMP remains the same. For a view or other dependent objects, a DDL like ALTER COMPILE would show the difference for these two columns. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: my max db_file_multiblock_read_count is 128.. why? Date: 2002-08-27 13:18:54 PST andreyNSPAM@bookexchange.net (NetComrade) wrote in message news:<3d6a6e60.1484346567@news.globix.com>... > [lroot/dragon@: ]echo 'vol_maxio /D' | adb -k > physmem 1390ee > vol_maxio: > vol_maxio: 512 > (256K) > > [lroot/dragon@: ]echo 'maxphys /D' | adb -k > physmem 1390ee > maxphys: > maxphys: 131072 > (128K) > > Oracle Block size=4K > > So I'd expect my max multi_block_read to be 64 (or even 32, not 100% > sure if maxphys has anything to do w/ vxfs) > > however > multiblock_read_test.sql (from Steve Adams/ixora.com.au) > shows that it's 128 > > Are oracle reported scattered reads not actual physical reads? > > Any extra insight is appreciated. > ....... > We use Oracle 8.1.7.3 on Solaris 2.7 boxes Hi, NetComrade, You asked a very interesting question. I don't know the answer for sure. But I suspect that Oracle only knows down to the logical I/O level, i.e. the file system level, I/O size. In your case, your vxfs I/O size is 512 units or 256K (unit of vol_maxio is 512-byte sectors according to Veritas manual). According to http://docs.sun.com/?q=tuneable&p=/doc/816-0607/6m735r5ev&a=view, maxphys is the "Maximum size of physical I/O requests... File systems can and do impose their own limit". (In your case, the filesystem limit is larger rather than smaller) I suspect that if you do a benchmark, you may find that setting db_file_multiblock_read_count over 32 (thus I/O chunk size is 128K) improves performance only very little or not at all. If it does still improve slowly, there may be some benefit of vxfs queuing a big chunk even if the device serializes the processing. I know and I tested on UFS, setting this parameter does not improve table scan performance, explained by Steve Adams. I'm anxious to see your benchmark on vxfs. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: Reduce Solaris context switches possible? Date: 2002-08-19 15:42:23 PST alainrtv@yahoo.com (Alan) wrote in message news:... > Hi, > > I'm looking for an enhancement I may make to my Solaris 2.8 OS. > > According to some guy I met at a user group, > I may lengthen the slice of time given to each process currently > resident in the run queue. > > Maybe it is some bizarre urban legend type thing (or maybe an > enhancement to something like Solaris 2.3). > > Have any of you heard of this Solaris enhancement? > > The idea behind this enhancement is that it improves the performance > of some database related processes like the Oracle kernel. > > I assume the performance boost comes as a result of fewer context > switches per second. Hi, Alan, I noticed you cross-posted this to two newsgroups. Oracle does not recommend you change oracle process priority. However, Steve Adams (http://www.ixora.com.au/tips/mystery.doc) did some experiment and claimed that "Despite warnings to the contrary, it does help to raise the operating system priority of both the LGWR and DBWR processes". 
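For what it's worth, on Solaris that kind of priority change would be done with something like the following (a sketch only, run as root; the priority values and [pid] are just illustrations, not from Steve's note):
# priocntl -s -c TS -m 20 -p 20 -i pid [pid]
or simply
# renice -n -10 -p [pid]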
I hope this adds another voice to this discussion. Note that he says raise the process priority (priocntl as well as renice comes to my mind), not modify dispatch table. It's unlikely performance improvement results from less context switches. It may well come from longer on-cpu time and less yield(2) calls. You may want to truss LGWR and DBWR for a few minutes and see whether you can gain by altering their priority. As others said, if disk I/O is your bottleneck, this won't help. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: transaction enqueue lock Date: 2002-08-15 14:28:57 PST jon.m.landenburger@verizon.com (Jon Landenburer) wrote in message news:... > We have a process which moves charges from one account to another. To > do this it updates the charge on the losing account (giving it an end > date) and then inserts the charge to the gaining account. Many cases > though the movement of an account may have thousands of charges. In > order to increase throughput we have attempted to run many of these > processes at the same time. There should be no row contention because > each process is working with different accounts and charges. When we > get above 4 processes running we encounter transaction enqueue locks > (TX). The holder of the lock is just going on updating and inserting > where as the blocked process is stopped on the first insert -- > waiting. There is no sharing here the blocked processes will not move > on until the blockers have completed. Effectively we are being held > to 4 of these concurrent processes. > > looking in v$rollstat there is almost no movement (inserts have no > before image and the updates are just a date field long-- so no > rollback contention. > > v$session_wait is where I see the enqueue > > Not sure where to go from here. > We are confident there is no row lock, no tables are being held > exclusively In addition to other people's advice, you may also look at INITRANS on the indexes for the table. Consider setting INITRANS to the same for their base table plus 1, after you increase the setting for the tables. For 8i, the quickest way to increase INITRANS is ALTER TABLE tablename MOVE TABLESPACE tablespacename. You don't need to specify another tablespace; i.e. the end result is not really moving the table to another tablespace. Remember to rebuild indexes after this, since they'll be UNUSABLE unless there's no row in the table. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: Free disk space on Oracle volumes ? Date: 2002-07-24 15:28:15 PST Jarek Palka wrote in message news:<3D3EA969.9090702@interia.KILLSPAM.pl>... > U¿ytkownik Jarek Palka napisa³: > > Is it possible to determine how much space left on volumes where Oracle > > data_files reside ? > > Is there a PL/SQL package/procedure which returns amount of free server > > disk > > space (or server disk usage ) ? > > I'd like know free disk space, NOT FREE tablespace size. > > > > Of course I know how to check free space on OS level (Win,unix,Netware). > I'd like to develop OS platform independent and Oracle version independent > PL/SQL STORED PROCEDURE working in JOB and checking amount of free space left on > server disks. How about this? It's going to be inefficient even though the file is sparse: for i in 10 .. 
100000 loop execute immediate 'create temporary tablespace aa tempfile ''/theFS/aa.dbf'' size ' || i || 'm'; -- drop aa if no exception, otherwise report the free space left as the number used end loop; Even on Windows, sparse files are supported (for NT, SP4 I believe has to be applied; no problem for Windows2000). Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: LOCK_SGA Date: 2002-07-24 15:13:06 PST Sybrand Bakker wrote in message news:<8gntju4pj7k82vcjvkcsi66j468d2nrs9s@4ax.com>... > On Wed, 24 Jul 2002 09:14:59 -0400, "Harry Sheng" > wrote: > > >Hi, group > > > >Does the "LOCK_SGA" parameter (Oracle8.1.7) mean anything on Solaris8 ? > >How can I clear the ORA-27126 error when I set the "LOCK_SGA" to true in > >my oracle parameter file ? > > > > > Oracle on Solaris automatically uses I(ntimate)S(hared)M(emory). > Hence you don't need to lock the SGA Indeed he doesn't need lock_sga unless _use_ism is explicitly set to false. His problem is that "On Solaris the function calls that Oracle uses to fix the SGA in memory can only be done by root, but the Oracle executable is normally owned by Oracle" (Metalink note 121983.1). According to the documentation about lock_sga, it's ignored if unsupported (which is the case on Solaris). So throwing ORA-27126 should be an Oracle bug, unless the documentation says otherwise. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: Oracle and (Cached?) Views Date: 2002-07-22 21:12:17 PST "Richard Foote" wrote in message news:lOvW8.31132$Hj3.94303@newsfeeds.bigpond.com > Now that I've thought about it for a tad longer, I'm not convinced that > views are actually ever cached in the DD so there goes that idea !! Hi, Richard, View definition is not cached in the dictionary cache. But it has to be in the data dictionary. You're probably saying it's not cached in DC. Of course reading from DD and parsing a complex view takes a lot of CPU. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: Data file size limit Date: 2002-07-22 15:00:31 PST "Sybrand Bakker" wrote in message > I remember seeing docs from Oracle stating on a 32-bit o/s the limit of a > file is 2G. I'm not sure why you contradict information in the docs. > > Regards Hi, Sybrand, The generic documentation (References -> Physical Database Limits) says: Database file size Maximum Operating system dependent. Limited by maximum operating system file size; typically 2^22 or 4M blocks. 32-bit OS does not necessarily limit OS filesize to 2GB or 2**31 (because you can use direct blocks, indirect blocks, double indirect and triple indirect blocks to contain file pointers). File systems can be created with the largefiles option. mount(1) tells you which one has that enabled. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: any other oracle newsgroup?
Date: 2002-07-22 14:46:00 PST Here's something I got by connecting to port 119 of news.verio.net some time ago: comp.databases.oracle.marketplace 66381 61026 y comp.databases.oracle.misc 28546 26895 y comp.databases.oracle.server 45045 42209 y comp.databases.oracle.tools 19380 18217 y The numbers roughly indicate how much traffic is on the group (the two numbers are total number of messages in this group and number of messages in this group carried by this news server; check RFC 977 and 2980 for details). Other than newsgroups, there're many mailing lists. The average age of mailing list subscribers is probably greater than that of newsgroup readers. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: Help me tuning this wait event:log file sync Date: 2002-07-17 09:42:40 PST >> The original poster says he's using Veritas Quick I/O. We should >> assume data redundancy is already done at the lower level. I always >> propose 1 log member at the company I'm working at, because we either >> have Veritas or (on non-production servers) Solaris Disk Suite. Using >> 2 or more log members used to be prevalent. >... > waiting to happen if that's all you've got. LGWR makes an error writing to > your one member -and the hardware immediately replicates that corruption > onto your mirror(s). You've now got two identical copies of useless redo >... Writing log entries is parallelized (wait event: log file parallel write). Only log file headers are written sequentially. This means that the same log entries are written to all members of a log group. If the written data is wrong, it's wrong in all members. Think carefully when you say "LGWR makes an error". It can mean Oracle's error or the error of anything underneath (OS, RAID hardware). The underlying OS or hardware only replicates Oracle's error. Are you saying LGWR can write correct logs to one member but corrupt logs to another? That's not possible. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: Suspend session? Date: 2002-06-17 15:05:57 PST "Chuck" wrote in message news:... > Is there a way to suspend a session without killing it in 8.1.7? Other than Jonathan's method [which is oradebug suspend/resume], you can also use any OS level debugger to attach to the shadow process in order to suspend the process, and therefore the session. For gdb, use the command attach [pid]. When you want to resume, type detach. Steve Adams has a note on the oradebug suspend command at http://www.ixora.com.au/tips/admin/suspend.htm Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: show parameter sort_area ? Date: 2002-06-12 10:34:46 PST rgaffuri@cox.net (Ryan Gaffuri) wrote in message news:<1efdad5b.0205291132.a62c5fc@posting.google.com>... > I am reading Tom Kyte's book and I see in one of his examples he does > the following > show parameter sort_area and gets some information back. > > I get an error saying that this is an unknown command for both > parameter and sort area. Im on 8.1.7 Im assuming I have to set > something up to do use this? I duplicated the problem and found the solution. You must have logged in as a user other than oracle. If that's the case, first, make sure your ORACLE_HOME and LD_LIBRARY_PATH are correct (the latter should contain $ORACLE_HOME/lib).
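For instance, in a Bourne-type shell it would be something like this (the path is only an illustration, not necessarily where your Oracle software lives):
$ ORACLE_HOME=/u01/app/oracle/product/8.1.7; export ORACLE_HOME
$ LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH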
Then you have to login through SQL*Net. I.e. user@ORCL, not just user (followed by password). If you login as user not user@ORCL, then you get that error on "show parameter". Tom must have logged in as oracle (owner of Oracle software files). Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Can wrap encode plain text? Date: 2002-05-23 14:29:20 PST I want to hide plain text in my PL/SQL code. If it's in dbms_out.put[_line] or htp.print, the text won't be "wrapped"; instead there's just a "1" in front of the line. The following test is done on 9i and 8.1.7.2: $ cat q.sql create or replace procedure p is begin dbms_output.put('This is trade secret!'); end; / $ wrap iname=q.sql PL/SQL Wrapper: Release 9.0.1.0.0- Production on Thu May 23 16:20:55 2002 Copyright (c) Oracle Corporation 1993, 2001. All Rights Reserved. Processing q.sql to q.plb $ grep "trade secret" q.plb 1This is trade secret!: Is there a way to encode that text? Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: Finding users who have dynamically set SORT_AREA_SIZE Date: 2002-05-08 15:51:07 PST "Jonathan Lewis" wrote in message news:<1020865132.12638.0.nnrp-07.9e984b29@news.demon.co.uk>... > Obviously you could do a use oradebug, > attach to another session and dump its > global_area, but I don't think that's what > you would consider an appropriate solution. > > -- > Jonathan Lewis > http://www.jlcomp.demon.co.uk Thanks for Jonathan's great idea. The way to do it is: Find the shadow process pid for the session. Go to svrmgrl (or sqlplus in new version). oradebug setospid --You can use other ways to set its pid oradebug dump global_area 14 exit Look at the trace file in udump. The following line stpdef stsustp_p [1A39218, 1A39240) = 000F4240 00010000 00000002 0000001B ... says the sort_area_size of that session is F4240 bytes (i.e. 1000000 bytes). [The above test is from 8.1.7.2] There may be other dump types that return this information. Yong Huang yong321@yahoo.com ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: Finding users who have dynamically set SORT_AREA_SIZE Date: 2002-05-09 10:21:50 PST "Jonathan Lewis" wrote in message news:<1020899946.2581.0.nnrp-08.9e984b29@news.demon.co.uk>... > Was the 14 a slip of the fingers. 4 - for UGA is > what you want, 14 will include the fixed SGA and > could be a bit large. > > BTW - the 00010000 following the sort_area_size > is the sort_area_retained_size. > > -- > Jonathan Lewis > http://www.jlcomp.demon.co.uk I wanted to try 12. But I accidentally typed 14, which still gave me what I want. I think the levels usually are 2, 4, 8 and 12. I wonder what the two numbers following "stpdef" and "stsustp_p" mean (stpdef is not even in `nm $ORACLE/bin/oracle`)? And why does the bracket close with parenthesis? Maybe an address range inclusive at the start but exclusive at the end? Oracle "hackers" such as Steve Adams or K Gopal may be more interested in those things! 
Yong Huang ******************************************************************************** From: Jonathan Lewis (jonathan@jlcomp.demon.co.uk) Subject: Re: Finding users who have dynamically set SORT_AREA_SIZE Date: 2002-05-09 11:31:27 PST For heap and memory dumps the typical values are: 1 PGA 2 SGA 4 UGA 8 CGA (C for call) 16 and 32 also have an effect in 9i, IIRC 32 relates to the large pool but I can't remember what 16 did. Your are correct about the 'clopen' notation. I've not come across nm before - perhaps the omission is because the dump is from a heap and not from the static data segment -- but that's a conjecture based on ignorance of the technology. -- Jonathan Lewis http://www.jlcomp.demon.co.uk ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: Bug in dba_free_space? Date: 2002-05-07 10:25:01 PST "Chuck" wrote in message news:... > Just added 64k to that datafile and an additional 100m magically appeared in > dba_free_space. Don't know where I picked up the idea that you could only > add that 64k at file creation time but it's obviously wrong. Thanks for the > help Jonathan. I think you got the impression from the statement that an Oracle extent has to be contiguous. But the word contiguous is as far as Oracle is concerned, i.e. datafile blocks are contiguous. As far as file system or OS is concerned, that's not the case. File allocation blocks (of size 512 bytes) could be anywhere. On UFS, the only tool I know that can tell you where each block is is Richard McDougal's filestat program (http://206.231.101.22/si/downloads/filestat). # filestat system01.dbf Inodes per cyl group: 6208 Inodes per block: 64 Cylinder Group no: 52 Cylinder Group blk: 5395712 File System Block Size: 8192 Device block size: 512 Number of device blocks: 665968 Start Block End Block Length (Device Blocks) ----------- ----------- ---------------------- 5391520 -> 5391551 32 5391808 -> 5391839 32 5393328 -> 5393455 128 6146240 -> 6148239 2000 6150928 -> 6152127 1200 6153360 -> 6162271 8912 6530848 -> 6530879 32 ... Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: How to find the sql associated with a transaction? Date: 2002-05-02 11:00:49 PST dba_222@yahoo.com (Roger Redford) wrote in message news:... > Recently, we had some problems with a distributed transaction. > > I can see if there are transactions in v$transaction. The > question was, what is the SQL for that transction? > > To test, I did I simple insert, but didn't commit. > > Select count(1) > from V$TRANSACTION 1 > > I've managed to join V$TRANSACTION with: > > V$SESSION: > > Select count(*) > FROM V$TRANSACTION tx, > V$session sess > where tx.ses_ADDR = sess.sADDR 1 > > But, when I've joined to the views: > V$SQL, V$SQLTEXT, V$SQLAREA, I've got 0 rows. > > Select count(1) > FROM V$TRANSACTION tx, > V$SQL sql > where tx.ADDR = sql.ADDRESS 0 You're almost there. You're confusing a transaction state object with a SQL cursor. Their addresses are of course different. (In fact you run the very small risk of accidentally matching them!) v$sql(text or area) address can match v$session.sql_address. That's the most commonly used. Other views such as v$open_cursor has an address column that can also match v$sqlXXX.address. If you write a script, make sure you also include the hash value columns. 
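A minimal sketch of such a script (I haven't tested this exact statement here; column names are as in the 8i/9i v$ views):
select s.sid, s.serial#, q.sql_text
from v$transaction t, v$session s, v$sql q
where t.ses_addr = s.saddr
and s.sql_address = q.address
and s.sql_hash_value = q.hash_value;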
Doing it manually doesn't need this because you can always tell if the query returns two SQLs with the same address but different hash values. Actually I've never seen this happen. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: PMON Date: 2002-04-21 14:03:55 PST i_rahim@hotmail.com (Imran Rahim) wrote in message news:... > Can someone please tell me how often PMON runs or how I can determine > this? I have SNIPED sessions that I want eradicating and I hear that PMON > should be doing this when it wakes up and does its job It's shown as ACTIVE in v$session all the time. At the OS level, it wakes up every 3 seconds (run OS process tracing on it; Solaris has truss, others strace). Other times it goes to sleep. You can explicitly wake it up in svrmgrl oradebug. But since it wakes up so frequently, there's rarely such a need. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: Latch spin - OS timeslice Date: 2002-03-27 21:59:19 PST alexbrds@yahoo.com (Alex Bardos) wrote in message news:... > time slice. I would expect the whole SPIN cycle to be comparable to > the time slice, still smaller though to increase chances of not being > preempted by the OS after the time slice is up. Take my Solaris 2.6 on sun4u 8 CPUs as an example. mpstat tells me the number of spins on mutexes (smtx column) per second on one processor is way below 100. Maybe this is at night. But I can't see a number higher than 100 no matter what I do in the database or through my Portal site which is notorious for library cache latch contention (my _spin_count is 2000). ps -cLp [pid] tells me the foreground and background oracle processes are in the TS (time sharing) priority class, with PRI ranging from 8 to 46. dispadmin -c TS -g tells me that their time quantum ranges from 40 to 200 ms for these PRIORITY LEVELs. I would imagine that time slice is more than enough for 2000 spins. But I don't know why mpstat shows such a low number for smtx, probably because it's an average of all TS processes, most of which are not spinning on mutexes. Yong Huang ******************************************************************************** From: Yong Huang (yong321@yahoo.com) Subject: Re: Async I/O is NOT supppressed for filesystem datafiles in 8.1.5? Date: 2002-01-04 10:10:30 PST [Original message from El Toro (medawsonjr@yahoo.com) can be summarized as >> Later documentation from Sun, and Metalink DocID#214480.999, both >> specify that the 8i releases that will suppress AIO on filesystems >> are 8.1.5 and 8.1.6. It goes on to read that even if you set >> disk_asynch_io to 'true', async I/O STILL would NOT be used. >> So, imagine my surprise when I trussed my DBWR process, only to >> find it making libaio calls to my datafiles -- and my Oracle version >> is 8.1.5! This is throwing my team into a state of confusion, as ] yong321@yahoo.com (Yong Huang) wrote in message news:... > > Again, my own test on 8.1.7.2 on Solaris 2.6 shows that if > disk_asynch_io is true, kaio() is attempted by DBWR with ENOTSUP > returned, followed by lwp_cond_signal(), lwp_cond_wait() and > pwrite64(). If disk_asynch_io is false, pwrite64() is not preceded by > kaio() and the lwp_cond_XX calls. I didn't see a fstat(2) call in > truss. Maybe you used a slightly older version of Oracle or Solaris? > As Bob Sneed's message says, the logic is do kaio, if that fails, do > lwp-based aio.
> Solaris does not use ioctl (you said fstat) to test > whether it's raw prior to kaio, because that's too expensive. But the > failed kaio call (if failed), is very lightweight, and so I infer that > turning off aio (lwp-based) with disk_asynch_io in Oracle only sligtly > helps. I found an 8.1.6.0.0 database at my disposal. Interestingly, the result is different from 8.1.7.2.0. Regardless of the setting of disk_asynch_io, only the pwrite64 call is seen in truss dbw0, not preceded by kaio or lwp-whatever. The file system is as usual for me, UFS. OS: Solaris 2.6, Patch Generic_105181-16, architecture sun4u. I wonder if this version of Oracle confirms what their official article says. Yong Huang
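P.S. If you want to repeat this kind of check, the truss invocation is along these lines ([pid] is DBWR's OS pid; the syscall list just matches the calls mentioned above):
$ truss -t kaio,pwrite64,lwp_cond_signal,lwp_cond_wait -p [pid]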