Friday 31 January 2014

How to Use WGET with a Proxy Server


Question sent in from Brad in South Florida.

Q: My work has a mandatory proxy server.  I think this is what is stopping wget from fetching pages.  Is there a way to allow wget to go through our proxy server?

A: Yes, here is a simple how-to on getting wget to respect proxy settings. As with most things in Linux, there is more than one way to make this happen.

Option 1: Set the http_proxy environment variable.

See this article to learn how to set, export and make the http_proxy variable global for a single user or all users:

How to Set the http_proxy Variable in Linux
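The linked article has the full details, but as a minimal sketch (the proxy host and port below are placeholders for your own proxy):

```shell
# Set the proxy variables for the current shell session.
# proxy.yourdomain.com:8080 is a placeholder -- substitute your own proxy.
export http_proxy="http://proxy.yourdomain.com:8080/"
export https_proxy="http://proxy.yourdomain.com:8080/"
export ftp_proxy="http://proxy.yourdomain.com:8080/"

# wget (and most other command-line tools) will now pick these up.
env | grep -i _proxy
```

Put the export lines in ~/.bashrc (or /etc/profile for all users) to make them permanent.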

Option 2: The preferred method is to configure wget to work with your proxy settings.

This option is best if you only need wget to work with the proxy from the command line, or if you need wget to use a different proxy from the one set in the environment with Option 1. If you have multiple command-line tools, most can be configured individually, or you can use Option 1 to cover them all.

You can edit the wget configuration file to tell it what proxy server to use for http, https and ftp connections. This file resides at /etc/wgetrc on most Red Hat, Fedora and CentOS systems.  I believe Ubuntu stores the file in the same location.

Open /etc/wgetrc in a text editor (vi, nano, gedit, etc..) and look for these lines which are commented out:

# You can set the default proxies for Wget to use for http, https, and ftp.
# They will override the value in the environment.
#https_proxy = http://proxy.yoyodyne.com:18023/
#http_proxy = http://proxy.yoyodyne.com:18023/
#ftp_proxy = http://proxy.yoyodyne.com:18023/
You will need to uncomment these lines (remove the leading pound sign) and add the correct information for your proxy, like so:

# You can set the default proxies for Wget to use for http, https, and ftp.
# They will override the value in the environment.
https_proxy = http://proxy.yourdomain.com:8080/
http_proxy = http://proxy.yourdomain.com:8080/
ftp_proxy = http://proxy.yourdomain.com:8080/

Save the file, and wget will now work from the command line through the proxy you just set.

If your proxy requires authentication, add the following lines to your wgetrc file:
http_proxy = http://proxy.yourdomain.com:8080/
proxy_user = user
proxy_password = password
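As an aside: if you would rather not touch /etc/wgetrc at all, wget's -e (--execute) switch accepts the same wgetrc commands for a single run, and --proxy-user/--proxy-password handle authentication. A sketch that just assembles such a command (the proxy URL, user, password, and target URL are all placeholders):

```shell
# Build a one-off wget command that passes proxy settings on the command line
# instead of reading them from /etc/wgetrc. All values here are placeholders.
proxy_url="http://proxy.yourdomain.com:8080/"
cmd="wget -e use_proxy=yes -e http_proxy=$proxy_url --proxy-user=user --proxy-password=password http://example.com/"
echo "$cmd"
```

Run the printed command directly once you have substituted your real proxy details.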

Monday 27 January 2014

Database Role

To check whether a database is running as a primary or a standby, query v$database:

select database_role from v$database;

Understanding Indexes

Of iPods and Indexes


I'm not really an "early-adopter" of technology. Don't get me wrong; I love it, I just don't want to feed the addiction. When I do get a new piece of technology though, it's like a fever; I can't concentrate on anything until I've read the manual from cover to cover and found out everything it can do, every built-in gizmo, and every trashy piece of after-market merchandise that can be plugged into it.
And I don't think I'm alone here. Working in I.T., there's no shortage of people who can peel off a half-hour litany on their new BlackBerry/iPod/notepad/digital watch within a day of purchase.
So why are databases different? I worked with Oracle databases for 5 years before I understood indexes - and it's right there in the manual (Concepts manual, for those interested). I don't mean a deep, spiritual, one-ness with indexes; I mean just a basic understanding of the mechanics of the things. I distinctly remember my first tuning methodology: "It's running slow. I think I'll index some of the columns and see if it improves." I should have copyrighted it because I've seen it used so many times in the last 10 years, I could've made a fortune in commissions.
If you understand how indexes work, 99 times out of 100 you don't need the suck-it-and-see methodology, because you know beforehand whether an index will help.

What is an Index?

This is covered in the Oracle Concepts manual, of course, but here's the CliffsNotes version.

Blocks

First you need to understand a block. A block - or page for Microsoft boffins - is the smallest unit of disk that Oracle will read or write. All data in Oracle - tables, indexes, clusters - is stored in blocks. The block size is configurable for any given database but is usually one of 4Kb, 8Kb, 16Kb, or 32Kb. Rows in a table are usually much smaller than this, so many rows will generally fit into a single block. So you never read "just one row"; you will always read the entire block and ignore the rows you don't need. Minimising this wastage is one of the fundamentals of Oracle Performance Tuning.
Oracle uses two different index architectures: b-Tree indexes and bitmap indexes. Cluster indexes, bitmap join indexes, function-based indexes, reverse key indexes and text indexes are all just variations on the two main types. b-Tree is the "normal" index, so we will come back to Bitmap indexes another time.

The "-Tree" in b-Tree

A b-Tree index is a data structure in the form of a tree - no surprises there - but it is a tree of database blocks, not rows. Imagine the leaf blocks of the index as the pages of a phone book.
  • Each page in the book (leaf block in the index) contains many entries, which consist of a name (indexed column value) and an address (ROWID) that tells you the physical location of the telephone (row in the table).
  • The names on each page are sorted, and the pages - when sorted correctly - contain a complete sorted list of every name and address.
A sorted list in a phone book is fine for humans, because we have mastered "the flick" - the ability to fan through the book looking for the page that will contain our target without reading the entire page. When we flick through the phone book, we are just reading the first name on each page, which is usually in a larger font in the page header. Oracle cannot read a single name (row) and ignore the rest of the page (block); it needs to read the entire block.
If we had no thumbs, we may find it convenient to create a separate ordered list containing the first name on each page of the phone book along with the page number. This is how the branch-blocks of an index work; a reduced list that contains the first row of each block plus the address of that block. In a large phone book, this reduced list containing one entry per page will still cover many pages, so the process is repeated, creating the next level up in the index, and so on until we are left with a single page: the root of the tree.
To find the name Galileo in this b-Tree phone book, we:
  • Read page 1. This tells us that page 6 starts with Fermat and that page 7 starts with Hawking.
  • Read page 6. This tells us that page 350 starts with Fyshe and that page 351 starts with Garibaldi.
  • Read page 350, which is a leaf block; we find Galileo's address and phone number.
That's it; 3 blocks to find a specific row in a million row table. In reality, index blocks often fit 100 or more rows, so b-Trees are typically quite shallow. I have never seen an index with more than 5 levels. Curious? Try this:
SELECT index_name, blevel+1 FROM user_indexes ORDER BY 2;
user_indexes.blevel is the number of branch levels. Always add 1 to include the leaf level; this tells you the number of blocks a unique index scan must read to reach the leaf block. If you're really, really, insatiably curious, try this in SQL*Plus:
ACCEPT index_name PROMPT "Index Name: "

ALTER SESSION SET TRACEFILE_IDENTIFIER = '&index_name';

COLUMN object_id NEW_VALUE object_id

SELECT object_id
FROM   user_objects
WHERE  object_type = 'INDEX'
AND    object_name = upper('&index_name');

ALTER SESSION SET EVENTS 'IMMEDIATE TRACE NAME TREEDUMP LEVEL &object_id';
ALTER SESSION SET TRACEFILE_IDENTIFIER = '';

SHOW PARAMETER user_dump_dest
Give the name of an index on a smallish table (because this will create a BIG file). Now, on the Oracle server, go to the directory shown by the final SHOW PARAMETER user_dump_dest command and find your trace file - the file name will contain your index name. Here is a sample:
*** 2007-01-31 11:51:26.822
----- begin tree dump
branch: 0x68066c8 109078216 (0: nrow: 325, level: 1)
   leaf: 0x68066c9 109078217 (-1: nrow: 694 rrow: 694)
   leaf: 0x68066ca 109078218 (0: nrow: 693 rrow: 693)
   leaf: 0x68066cb 109078219 (1: nrow: 693 rrow: 693)
   leaf: 0x68066cc 109078220 (2: nrow: 693 rrow: 693)
   leaf: 0x68066cd 109078221 (3: nrow: 693 rrow: 693)
   ...
   ...
   leaf: 0x68069cf 109078991 (320: nrow: 763 rrow: 763)
   leaf: 0x68069d0 109078992 (321: nrow: 761 rrow: 761)
   leaf: 0x68069d1 109078993 (322: nrow: 798 rrow: 798)
   leaf: 0x68069d2 109078994 (323: nrow: 807 rrow: 807)
----- end tree dump
This index has just a single root branch block with 325 leaf blocks (note the nrow: 325 in the branch line; the leaves are numbered -1 through 323). Each leaf block contains a variable number of index entries - up to 807 here! A deeper index would be more interesting, but it would take a while to dump.
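The arithmetic behind such shallow trees is easy to sketch: each extra level multiplies the number of rows the index can cover by the entries-per-block figure. The numbers below are illustrative only - 700 entries per block roughly matches the dump above, and the million-row count is arbitrary:

```shell
# Estimate b-Tree height: capacity multiplies by ~entries-per-block per level.
# 700 entries/block roughly matches the tree dump; 1,000,000 rows is arbitrary.
awk -v rows=1000000 -v per_block=700 'BEGIN {
    height = 1            # start with just the leaf level
    capacity = per_block
    while (capacity < rows) {
        capacity *= per_block   # add a branch level above
        height++
    }
    print height " block reads for a unique scan"
}'
```

With these numbers the answer is 3 reads, matching the phone-book walkthrough earlier: even a million rows rarely needs more than a handful of levels.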

"B" is for...

Contrary to popular belief, b is not for binary; it's balanced.
As you insert new rows into the table, new entries are inserted into index leaf blocks. When a leaf block is full, the next insert causes the block to be split into two blocks, which means an entry for the new block must be added to the parent branch-block. If the branch-block is also full, it too is split. The process propagates back up the tree until a parent has space for one more entry or the root is reached, in which case a new root is created and the tree grows by one level. Staggeringly, this process ensures that every path from root to leaf is always the same length. Try it on paper for yourself!

How are Indexes used?

Indexes have three main uses:
  • To quickly find specific rows by avoiding a Full Table Scan
    We've already seen above how a Unique Scan works. Using the phone book metaphor, it's not hard to understand how a Range Scan works in much the same way to find all people named "Galileo", or all of the names alphabetically between "Smith" and "Smythe". Range Scans can occur when we use >, <, LIKE, or BETWEEN in a WHERE clause. A range scan finds the first row in the range using the same technique as the Unique Scan, then keeps reading the index to the end of the range; this works even when the range spans many leaf blocks, because the leaf blocks are chained together in key order.
  • To avoid a table access altogether
    If all we wanted to do when looking up Galileo in the phone book was to find his address or phone number, the job would be done. However, if we wanted to know his date of birth, we'd have to phone and ask. This takes time. If it was something that we needed all the time, like an email address, we could save time by adding it to the phone book.
    Oracle does the same thing. If the information is in the index, then it doesn't bother to read the table. It is a reasonably common technique to add columns to an index, not because they will be used as part of the index scan, but because they save a table access. In fact, Oracle may even perform a Fast Full Scan of an index that it cannot use in a Range or Unique scan just to avoid a table access.
  • To avoid a sort
    This one is not so well known, largely because it is so poorly documented (and in many cases, unpredictably implemented by the Optimizer as well). Oracle performs a sort for many reasons: ORDER BY, GROUP BY, DISTINCT, set operations (eg. UNION), sort-merge joins, uncorrelated IN-subqueries, and analytic functions. If a sort operation requires rows in the same order as the index, then Oracle may read the table rows via the index, and no sort operation is necessary since the rows come back in sorted order.
    Despite all of the instances listed above where a sort is performed, I have only seen three cases where a sort is actually avoided.
    1. GROUP BY
        1  select src_sys, sum(actl_expns_amt), count(*)
        2  from ef_actl_expns
        3  where src_sys = 'CDW'
        4  and actl_expns_amt > 0
        5* group by src_sys
      
      -------------------------------------------------------------
      | Id  | Operation                           | Name          |
      -------------------------------------------------------------
      |   0 | SELECT STATEMENT                    |               |
      |   1 |  SORT GROUP BY NOSORT               |               |
      |*  2 |   TABLE ACCESS BY GLOBAL INDEX ROWID| EF_ACTL_EXPNS |
      |*  3 |    INDEX RANGE SCAN                 | EF_AEXP_PK    |
      -------------------------------------------------------------
      
      Predicate Information (identified by operation id):
      ---------------------------------------------------
      
         2 - filter("ACTL_EXPNS_AMT">0)
         3 - access("SRC_SYS"='CDW')
      Note the NOSORT qualifier in Step 1.
    2. ORDER BY
        1  select *
        2  from ef_actl_expns
        3  where src_sys = 'CDW'
        4  and actl_expns_amt > 0
        5* order by src_sys
      
      ------------------------------------------------------------
      | Id  | Operation                          | Name          |
      ------------------------------------------------------------
      |   0 | SELECT STATEMENT                   |               |
      |*  1 |  TABLE ACCESS BY GLOBAL INDEX ROWID| EF_ACTL_EXPNS |
      |*  2 |   INDEX RANGE SCAN                 | EF_AEXP_PK    |
      ------------------------------------------------------------
      
      Predicate Information (identified by operation id):
      ---------------------------------------------------
      
         1 - filter("ACTL_EXPNS_AMT">0)
         2 - access("SRC_SYS"='CDW')
      
      Note that there is no SORT operation, despite the ORDER BY clause. Compare this to the following:
        1  select *
        2  from ef_actl_expns
        3  where src_sys = 'CDW'
        4  and actl_expns_amt > 0
        5* order by actl_expns_amt
      
      -------------------------------------------------------------
      | Id  | Operation                           | Name          |
      -------------------------------------------------------------
      |   0 | SELECT STATEMENT                    |               |
      |   1 |  SORT ORDER BY                      |               |
      |*  2 |   TABLE ACCESS BY GLOBAL INDEX ROWID| EF_ACTL_EXPNS |
      |*  3 |    INDEX RANGE SCAN                 | EF_AEXP_PK    |
      -------------------------------------------------------------
      
      Predicate Information (identified by operation id):
      ---------------------------------------------------
      
         2 - filter("ACTL_EXPNS_AMT">0)
         3 - access("SRC_SYS"='CDW')
    3. DISTINCT
        1  select distinct src_sys
        2  from ef_actl_expns
        3  where src_sys = 'CDW'
        4* and actl_expns_amt > 0
      
      -------------------------------------------------------------
      | Id  | Operation                           | Name          |
      -------------------------------------------------------------
      |   0 | SELECT STATEMENT                    |               |
      |   1 |  SORT UNIQUE NOSORT                 |               |
      |*  2 |   TABLE ACCESS BY GLOBAL INDEX ROWID| EF_ACTL_EXPNS |
      |*  3 |    INDEX RANGE SCAN                 | EF_AEXP_PK    |
      -------------------------------------------------------------
      
      Predicate Information (identified by operation id):
      ---------------------------------------------------
      
         2 - filter("ACTL_EXPNS_AMT">0)
         3 - access("SRC_SYS"='CDW')
      Again, note the NOSORT qualifier.
    This is an extraordinary tuning technique in OLTP systems like SQL*Forms that return one page of detail at a time to the screen. A SQL with a DISTINCT, GROUP BY, or ORDER BY that uses an index to sort can return just the first page of matching rows without having to fetch the entire result set for a sort. This can be the difference between sub-second response time and several minutes or hours.

    Everybody repeat after me: "Full table Scans are not bad"

    Up to now, we've seen how indexes can be good. It's not always the case; sometimes indexes are no help at all, or worse: they make a query slower.
    A b-Tree index will be no help at all unless the WHERE clause compares indexed columns using =, >, <, LIKE, IN, or BETWEEN operators. A b-Tree index cannot be used to scan for any NOT-style operators: eg. !=, NOT IN, NOT LIKE. There are lots of conditions, caveats, and complexities regarding joins, sub-queries, OR predicates, functions (inc. arithmetic and concatenation), and casting that are outside the scope of this article. Consult a good SQL tuning manual.
    Much more interesting - and important - are the cases where an index makes a SQL slower. These are particularly common in batch systems that process large quantities of data.
    To explain the problem, we need a new metaphor. Imagine a large deciduous tree in your front yard. It's Autumn, and it's your job to pick up all of the leaves on the lawn. Clearly, the fastest way to do this (without a rake, or a leaf-vac...) would be to get down on hands and knees with a bag and work your way back and forth over the lawn, stuffing leaves in the bag as you go. This is a Full Table Scan, selecting rows in no particular order, except that they are nearest to hand. This metaphor works on a couple of levels: you would grab leaves in handfuls, not one by one. A Full Table Scan does the same thing: when a block is read from disk, Oracle caches the next few blocks with the expectation that it will be asked for them very soon. Type this in SQL*Plus:
    SHOW PARAMETER db_file_multiblock_read_count
    
    Just to shake things up a bit (and to feed an undiagnosed obsessive compulsive disorder), you decide to pick up the leaves in order of size. In support of this endeavour, you take a digital photograph of the lawn, write an image analysis program to identify and measure every leaf, then load the results into a Virtual Reality headset that will highlight the smallest leaf left on the lawn. Ingenious, yes; but this is clearly going to take a lot longer than a full table scan because you cover much more distance walking from leaf to leaf.
    So obviously Full Table Scan is the faster way to pick up every leaf. But just as obvious is that the index (virtual reality headset) is the faster way to pick up just the smallest leaf, or even the 100 smallest leaves. As the number rises, we approach a break-even point; a number beyond which it is faster to just full table scan. This number varies depending on the table, the index, the database settings, the hardware, and the load on the server; generally it is somewhere between 1% and 10% of the table.
    The main reasons for this are:
    • As implied above, reading a table in indexed order means more movement for the disk head.
    • Oracle cannot read single rows. To read a row via an index, the entire block must be read with all but one row discarded. So an index scan of 100 rows would read 100 blocks, but a FTS might read 100 rows in a single block.
    • The db_file_multiblock_read_count setting described earlier means FTS requires fewer visits to the physical disk.
    • Even if none of these things was true, accessing the entire index and the entire table is still more IO than just accessing the table.
  • So what's the lesson here? Know your data! If a query needs 50% of the rows in the table, an index scan just won't help. Not only should you not bother creating an index, you should check to make sure Oracle is not already using one. There are a number of ways to influence index usage; once again, consult a tuning manual. The exception to this rule - there's always one - is when all of the columns referenced in the SQL are contained in the index. If Oracle does not have to access the table then there is no break-even point; it is generally quicker to scan the index even for 100% of the rows.
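The break-even arithmetic can be sketched with toy numbers. In the simplest model an index scan reads roughly one scattered table block per row fetched, while a full table scan reads every block once; the figures below (a million rows, 100 rows per block) are illustrative only:

```shell
# Toy break-even estimate between an index scan and a full table scan.
# All numbers are illustrative; the real break-even depends on caching,
# multiblock reads, clustering factor, hardware and load.
rows=1000000
rows_per_block=100
table_blocks=$((rows / rows_per_block))   # 10,000 table blocks

# Index scan of N rows ~= N scattered block reads; FTS ~= table_blocks reads.
# In this crude model they meet where N equals the number of table blocks.
breakeven_rows=$table_blocks
awk -v be="$breakeven_rows" -v rows="$rows" \
    'BEGIN { printf "break-even at ~%d rows (%.1f%% of the table)\n", be, be * 100 / rows }'
```

Caching and multiblock reads shift this figure in practice, which is why the break-even is quoted as a range (1% to 10%) rather than a single number.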


Thursday 23 January 2014

FORCE LOGGING Option

The FORCE LOGGING option was introduced in Oracle9i Release 2. It can be set at the database level or the tablespace level, with the database-level setting taking precedence. If a tablespace is created or altered with FORCE LOGGING enabled, any change in that tablespace will go into the redo log and be usable for recovery.
Similarly, if a database is created or altered to have FORCE LOGGING enabled, any change across the database, with the exception of temporary segments and temporary tablespaces, will be available in the redo logs for recovery.
The FORCE LOGGING option can be set at database creation time or later using the alter database command.
To set FORCE LOGGING during the database creation, specify the following:
CREATE DATABASE <dbname> ... FORCE LOGGING ...;
To enable FORCE LOGGING after the database is created, use the following command:
ALTER DATABASE FORCE LOGGING;
The FORCE LOGGING option is the safest method to ensure that all the changes made in the database will be captured and available for recovery in the redo logs. Force logging was a new addition to the family of logging attributes.
Before FORCE LOGGING existed, Oracle provided the LOGGING and NOLOGGING options. These take higher precedence at the schema object level than at the tablespace level, so it was possible to override a LOGGING setting at the tablespace level with a NOLOGGING setting at the schema object level.
The database, or the tablespaces in it, should be put into FORCE LOGGING mode before creating the backup for a standby database. Put either the database or all of its tablespaces into this mode, but not both.
The following statement will put a tablespace in FORCE LOGGING mode:
ALTER TABLESPACE <tablespace name> FORCE LOGGING;
The FORCE LOGGING mode can be cancelled at the database level using the following statement:
ALTER DATABASE NO FORCE LOGGING;
The FORCE LOGGING mode can be cancelled at the tablespace level using the following statement:
ALTER TABLESPACE <tablespace name> NO FORCE LOGGING;
Temporary tablespaces and temporary segments are unaffected by FORCE LOGGING mode because these objects do not generate any redo. Undo tablespaces are in FORCE LOGGING mode by default, so they cannot be explicitly put into it. Oracle will generate an error if an attempt is made to put a temporary tablespace or an undo tablespace into FORCE LOGGING mode.
The FORCE_LOGGING column of v$database view can be queried to verify that the database is in FORCE LOGGING mode. Similarly, the FORCE_LOGGING column of dba_tablespaces view provides the same logging information for each tablespace.
select force_logging from v$database;
select force_logging from dba_tablespaces;
Force logging mode is persistent across database restarts, but it is not maintained when the control file is recreated unless the FORCE LOGGING clause is specified in the CREATE CONTROLFILE statement. Also, a tablespace in FORCE LOGGING mode does not maintain that mode when transported to another database.
In these situations, the FORCE LOGGING mode would have to be re-enabled. The primary database should remain in FORCE LOGGING mode as long as there is at least one Oracle instance in use.  Putting a database in FORCE LOGGING mode will have some performance impact.


Wednesday 22 January 2014

Oracle shared server processes generates huge trace files in diag > rdbms > trace directory


There is an Oracle bug that can cause shared server processes to generate huge trace files in the diag > rdbms > trace directory.

If you are using RAC, this problem usually appears on only one node at a time.

Inside the trace files you can find the following errors (the reparse message is the main thing to watch for):
-----------
kksfbc: entering reparse diagnosis mode for xsc:0x7381dcbd0
...

and

 Check cursor failed ctx=(nil) xscflg=110632 ctxflg=0 flg=102
 kksSearchChildList: no suitable child found (hash_match=0)
...

and

Current Cursor Sharing Diagnostics Nodes:
  Child Node: 2  ID=40 reason=Bind mismatch(8) size=4x4
...

-----------

This bug itself is not fatal as long as you have enough free space on the disk holding the diag directory.
But if that disk runs out of free space, the alert log will start to log errors about trace file writes.

And if the diag directory is in the default place, there will probably be other logs and files on the same disk - the listener log, for example - which will cause you more problems when the disk is full.

There is a fix for this bug in MOS (My Oracle Support), along with more information, under MOS note:
10373013.8   (you can check for patches under Patches & Updates > search 10373013)


NOTE! The bug is already fixed in the 11.2.0.3 and 12.1 versions, and in some patch set updates of 11.2.0.2. A quick workaround for this problem is to empty the trace files, for example like this in Linux:
cat /dev/null > trace_file_name.trc
If you just delete the file, the used space will not be freed until you restart the database, but emptying the file this way frees the space immediately. This is just a workaround, though, and the error can hit again later, so if you get these often it is better to apply the bug fix or upgrade the database to a newer version.
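The difference between deleting and emptying is easy to demonstrate on a throwaway file (the file below is a temp file created for the demo, not a real trace file):

```shell
# Demonstrate that redirecting /dev/null into a file truncates it in place.
# demo_trace is a throwaway temp file, not a real Oracle trace file.
demo_trace=$(mktemp /tmp/demo_trace.XXXXXX)
head -c 1048576 /dev/zero > "$demo_trace"   # fill with 1 MB of data
wc -c < "$demo_trace"                       # shows 1048576 bytes

# Truncate in place: this frees the space immediately, even if another
# process still holds the file open. rm, by contrast, only frees the space
# once every open file descriptor is closed, e.g. on database restart.
cat /dev/null > "$demo_trace"
wc -c < "$demo_trace"                       # shows 0 bytes

rm -f "$demo_trace"
```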

Monday 6 January 2014

Oracle11g RAC On Linux