DBMS Jobs

* Once you submit or remove a job, commit, because procedures in the dbms_job package don't auto-commit, unlike those in most other PL/SQL packages. This non-autocommit behavior is not an annoyance: it offers the same transactional capability as a trigger, so a job can be "un-scheduled" simply by rolling back. (A minimal sketch follows at the end of these notes.)

* Unlike a scheduler job (scheduled by dbms_scheduler), you must log in as the dbms job's owner to remove a job or mark it broken (perhaps through proxy logon), or create a procedure in his schema to do it:

    create procedure jobowner.tmp as begin dbms_job.remove(&job); end;
    /
    exec jobowner.tmp
    drop procedure jobowner.tmp;

  If dba_jobs.log_user differs from schema_user, create the procedure in schema_user's schema. The job's log user may differ from its schema user because the schema data and the job were imported by that user. See ./JobLogUser=ImpUser.txt (A query to check both columns is at the end of these notes.)

* If your job interacts with the OS and needs a network connection, the connection information may be cached in the job queue slave processes ora_j* (not in their parent process, the job queue coordinator ora_cjq0_<SID>). For instance, if you change the DNS, the job slave processes will keep using the old DNS record, which never expires or times out within ora_j*. Verify with strace -p <pid> -e trace=connect while you submit and run a job; a shell sketch is at the end of these notes. (strace, or truss on some UNIXes, is the best way to troubleshoot a problem when an Oracle session interacts with the OS, including problems with utl_smtp or utl_mail.) You can kill the sessions of the job slave processes:

    select 'alter system kill session ''' || sid || ',' || serial# || ''' immediate;'
    from v$session where program like '%(J0%';

  Keeping the job_queue_processes parameter as is (default 10), the ora_j* processes will be recreated when a job runs. Killing the ora_j* processes directly at the OS level is fine too. (To determine whether a background process is killable, on Linux, check its environment variable SKGP_HIDDEN_ARGS (with `ps eww <pid>` or `cat /proc/<pid>/environ`) and see whether its first element is BG or FATAL.)

* In RAC, specify the instance when submitting a job if you want to control which node runs the job, or change it later with dbms_job.instance; otherwise the job can run on any instance. (Sketch at the end of these notes.)

* There are numerous bugs about memory leaks in the job queue coordinator, ora_cjq0_<SID>; v$process.pga_alloc_mem can grow to over 1 GB. It's a killable process; killing it won't crash the instance. Reducing job_queue_processes from its ridiculous value of 1000 (the default maximum) has no effect.

* If nobody in the team recalls setting up a job, it could have been set up automatically by a refresh group. Check dba_rgroup. (If the job is removed, dba_rgroup.job will not be updated.)

* Personally, I prefer a cron job to a dbms job. Cron has a much longer history (40 years!), is more mature, and is much easier to troubleshoot. People find cron jobs hard to troubleshoot partly because they like to schedule them like ... script > /dev/null 2>&1. I always redirect stdout and stderr to something, ... script > /tmp/myjob.out 2>&1, even when the script has its own logging; /tmp/myjob.out often captures valuable information for troubleshooting. If you don't like that, at least do it while you're troubleshooting. In addition to stdout/stderr redirection, another convenience a cron job offers is the simple -x option of your shell, particularly when you hate the fact that a dbms job leaves no record and produces no trace file when Oracle *thinks* the job ran without error. (Example crontab entries are at the end of these notes.)
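A minimal sketch of the submit-then-commit point above, in SQL*Plus; the procedure name my_refresh_proc and the daily interval are assumptions for illustration:

    variable jobno number
    begin
      -- dbms_job.submit does not commit; without the explicit commit,
      -- a rollback (or abnormal disconnect) un-schedules the job
      dbms_job.submit(:jobno, 'my_refresh_proc;', sysdate, 'sysdate + 1');
      commit;
    end;
    /
    print jobno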
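To see whether a job's log user and schema user differ (and so in which schema the wrapper procedure must be created), query dba_jobs:

    select job, log_user, priv_user, schema_user, what
    from   dba_jobs
    where  job = &job;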
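A shell sketch of the OS-side checks above, on Linux; the SID ORCL is an assumption, <pid> is a placeholder for the slave's process id, and note the slave must be running (i.e., a job is executing) for strace to have something to attach to:

    # list the job queue slaves for the assumed SID ORCL
    pgrep -lf 'ora_j0.*_ORCL'
    # watch outbound connections while the job runs (e.g. stale DNS)
    strace -f -e trace=connect -p <pid>
    # killability check: /proc/<pid>/environ is NUL-separated
    tr '\0' '\n' < /proc/<pid>/environ | grep SKGP_HIDDEN_ARGS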
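For the RAC point, a sketch assuming job number 123 and target instance 2 (jobno is the bind variable from the first sketch):

    -- pin an existing job to instance 2
    exec dbms_job.instance(job => 123, instance => 2)
    commit;
    -- or specify the instance at submit time
    begin
      dbms_job.submit(:jobno, 'my_refresh_proc;', sysdate, 'sysdate + 1',
                      instance => 2);
      commit;
    end;
    /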
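To watch the coordinator's PGA growth for the memory-leak symptom above:

    select spid, program, round(pga_alloc_mem/1024/1024) pga_mb
    from   v$process
    where  program like '%CJQ0%';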
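To check whether a mystery job belongs to a refresh group:

    select * from dba_rgroup where job = &job;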
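Example crontab entries for the logging advice above; the script path is hypothetical:

    # always keep stdout/stderr somewhere, even if the script logs itself
    0 2 * * * /home/oracle/scripts/nightly.sh > /tmp/myjob.out 2>&1
    # while troubleshooting, add the shell's -x trace as well
    0 2 * * * sh -x /home/oracle/scripts/nightly.sh > /tmp/myjob.out 2>&1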