Tuesday, July 27, 2010

ORA-00845: MEMORY_TARGET not supported on this system

While creating a database with DBCA, the 11g database creation GUI gives the following error message in a pop-up window:
ORA-00845: MEMORY_TARGET not supported on this system
The pop-up lets you ignore the error message, but the underlying problem remains.
A similar scenario also occurs whenever you try to start your database; STARTUP then shows an error message like the one below.

SQL> STARTUP
ORA-00845: MEMORY_TARGET not supported on this system

Cause of the Problem
•Starting with Oracle 11g, the Automatic Memory Management feature is defined with the parameters MEMORY_TARGET and MEMORY_MAX_TARGET.
•On Linux the shared memory file system needs to be mounted on the /dev/shm directory of the operating system.
•The size of /dev/shm needs to be greater than MEMORY_TARGET and MEMORY_MAX_TARGET.
•In 11g, AMM (Automatic Memory Management) manages both the SGA and the PGA together through the MMAN process.
•The MEMORY_TARGET parameter in 11g replaces the 10g combination SGA_TARGET + PGA_AGGREGATE_TARGET.
•The MEMORY_MAX_TARGET parameter in 11g replaces the 10g SGA_MAX_SIZE parameter.
•ORA-00845 can arise for the following two reasons on a Linux system.

1) The shared memory file system mounted on the /dev/shm directory is smaller than MEMORY_TARGET or MEMORY_MAX_TARGET.
or,
2) No shared memory file system is mounted on the /dev/shm directory.
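Before changing the mount, it can help to compare the configured memory targets with the current size of /dev/shm. A minimal sketch (if the instance cannot start at all because of ORA-00845, skip this and check df -h /dev/shm from the shell instead):

```sql
-- Show the AMM parameters the instance is configured with
SHOW PARAMETER memory_target
SHOW PARAMETER memory_max_target

-- Or query them directly, converted to megabytes
SELECT name, value/1024/1024 AS mb
FROM   v$parameter
WHERE  name IN ('memory_target', 'memory_max_target');
```

If the reported values are larger than the size df shows for /dev/shm, you have found the cause of the error.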

Solution of the Problem:

Make sure /dev/shm is properly mounted. You can check it with the df -h or df -k command. The output should look similar to this:

$ df -k
Filesystem            Size  Used Avail Use% Mounted on
...
shmfs                 1G    512M 512M  50% /dev/shm


We see here that /dev/shm has been assigned 1G of memory. If you set MEMORY_TARGET to more than 1G, the ORA-00845 error above will arise. For example, if you have MEMORY_TARGET or MEMORY_MAX_TARGET set to 12G, you can mount the shared memory with a size of 13G as below.
As a root user,
# mount -t tmpfs shmfs -o size=13g /dev/shm
To make the setting persistent, so that it survives a machine restart, add an entry to /etc/fstab similar to the following:

shmfs /dev/shm tmpfs size=13g 0 0

Friday, July 23, 2010

Redo Logs etc


The most crucial and vital structure for recovery operations is the online redo log, which consists of two or more pre-allocated files that store all changes made to the database as they occur. Every instance of an Oracle database has an associated online redo log to protect the database in case of an instance failure.

What is a redo log thread?

Each database instance has its own online redo log groups. These online redo log groups, multiplexed or not, are called an instance's thread of online redo. In typical configurations, only one database instance accesses an Oracle database, so only one thread is present. When running Oracle Real Application Clusters, however, two or more instances concurrently access a single database and each instance has its own thread. The relationship between Oracle instances and a database is many-to-one: more than one instance can access a database. This kind of configuration is called a parallel server configuration.

What do those files contain?

Online redo log files are filled with redo records. A redo record, also called a redo entry, is made up of a group of change vectors, each of which is a description of a change made to a single block in the database. For example, if you change a salary value in an employee table, you generate a redo record containing change vectors that describe changes to the data segment block for the table, the rollback segment data block, and the transaction table of the rollback segments.

Redo entries record data that you can use to reconstruct all changes made to the database, including the rollback segments. Therefore, the online redo log also protects rollback data.

When you recover the database using redo data, Oracle reads the change vectors in the redo records and applies the changes to the relevant blocks.

Redo records are buffered in a circular fashion in the redo log buffer of the SGA and are written to one of the online redo log files by the Oracle background process Log Writer (LGWR). Whenever a transaction is committed, LGWR writes the transaction's redo records from the redo log buffer of the SGA to an online redo log file, and a system change number (SCN) is assigned to identify the redo records for each committed transaction. Only when all redo records associated with a given transaction are safely on disk in the online logs is the user process notified that the transaction has been committed.

Redo records can also be written to an online redo log file before the corresponding transaction is committed. If the redo log buffer fills, or another transaction commits, LGWR flushes all of the redo log entries in the redo log buffer to an online redo log file, even though some redo records may not be committed. If necessary, Oracle can roll back these changes.
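The redo log groups and their states can be inspected from the dynamic performance views; a quick sketch:

```sql
-- One row per redo log group: which group LGWR is writing to (CURRENT),
-- which are still needed for recovery (ACTIVE), and which are reusable (INACTIVE)
SELECT group#, thread#, sequence#, bytes/1024/1024 AS mb, archived, status
FROM   v$log
ORDER  BY group#;
```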

How Oracle Writes to the Online Redo Log?

The online redo log of a database consists of two or more online redo log files. Oracle requires
a minimum of two files to guarantee that one is always available for writing while the other is being archived (if in ARCHIVELOG mode).

LGWR writes to online redo log files in a circular fashion. When the current online redo log file fills, LGWR begins writing to the next available online redo log file. When the last available online redo log file is filled, LGWR returns to the first online redo log file and writes to it, starting the cycle again. Filled online redo log files are available to LGWR for reuse depending on whether archiving is enabled. There can be contention between the filling of the online redo log files and the archiving of the filled log files if the logs fill faster than they can be archived. This is because online log files are written in Oracle blocks while archive logs are written in OS blocks.

If archiving is disabled (NOARCHIVELOG mode), a filled online redo log file is available once the changes recorded in it have been written to the data files.

If archiving is enabled (ARCHIVELOG mode), a filled online redo log file is available to LGWR once the changes recorded in it have been written to the datafiles and once the file has been archived.

What is meant by Active (Current) and Inactive Online Redo Log Files???

At any given time, Oracle uses only one of the online redo log files to store redo records written from the redo log buffer. The online redo log file that LGWR is actively writing to is called the current online redo log file.

Online redo log files that are required for instance recovery are called active online redo log files. Online redo log files that are not required for instance recovery are called inactive.

If you have enabled archiving (ARCHIVELOG mode), Oracle cannot reuse or overwrite an active online log file until ARCn has archived its contents. If archiving is disabled (NOARCHIVELOG mode), when the last online redo log file fills, writing continues by overwriting the first available inactive file.

Which parameters influence log switches?

LOG_CHECKPOINT_TIMEOUT specifies the maximum amount of time, in seconds, that may pass between the incremental checkpoint position and the last write to the redo log (sometimes called the tail of the log). This parameter also guarantees that no buffer will remain dirty in the cache for more than that many seconds. This is time-based checkpointing of the log files.

LOG_CHECKPOINT_INTERVAL specifies the frequency of checkpoints in terms of the number of redo log file blocks that can exist between an incremental checkpoint and the last block written to the redo log. This number refers to physical operating system blocks, not database blocks. This is block-based checkpointing of the log files.
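Both parameters can be changed dynamically with ALTER SYSTEM. A sketch; the values below are purely illustrative, not recommendations:

```sql
-- Force an incremental checkpoint at least every 1800 seconds
ALTER SYSTEM SET LOG_CHECKPOINT_TIMEOUT = 1800 SCOPE=BOTH;
-- Force an incremental checkpoint at least every 100000 OS blocks of redo
ALTER SYSTEM SET LOG_CHECKPOINT_INTERVAL = 100000 SCOPE=BOTH;
```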

How to add a redo log file group and member??

Suppose you are to add group 5 with 2 members; the command is:

ALTER DATABASE
ADD LOGFILE GROUP 5
('c:\oracle\oradata\whs\redo\redo_05_01.log',
'd:\oracle\oradata\whs\redo\redo_05_02.log')
SIZE 100M;

The following command adds another member to an already existing group:

ALTER DATABASE ADD LOGFILE MEMBER
'c:\oracle\oradata\whs\redo\redo_05_03.log'
TO GROUP 5;
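After adding a group or a member, you can confirm the new layout from V$LOGFILE. A sketch (note that STATUS is blank for a healthy, in-use member):

```sql
SELECT group#, member, status
FROM   v$logfile
ORDER  BY group#, member;
```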

How to move a redo log file from one destination to another destination??

01. Shut down the database with NORMAL or IMMEDIATE, but not ABORT.
shutdown immediate;
02. Copy the online redo log files to the new location.
On Unix, use the mv command; on Windows, the move command.
03. Start the database in MOUNT mode, logging in as SYSDBA (do not open the database).
startup mount pfile=

04. Issue the following statement.
Example:
You are moving the files from c:\oracle\oradata\redologs to c:\oracle\oradata\whs\redologs, and likewise on the d:\ drive.

ALTER DATABASE
RENAME FILE
'c:\oracle\oradata\redologs\redo_01_01.log',
'd:\oracle\oradata\redologs\redo_01_02.log'
TO
'c:\oracle\oradata\whs\redologs\redo_01_01.log',
'd:\oracle\oradata\whs\redologs\redo_01_02.log'
/

05. Open the database

alter database open;

How to drop a redo log file group and/or member?

To drop an online redo log group, you must have the ALTER DATABASE system privilege. Before dropping an online redo log group, consider the following restrictions and precautions:

An instance requires at least two groups of online redo log files, regardless of the number of members in the groups. (A group is one or more members.)

You can drop an online redo log group only if it is inactive. If you need to drop the current group, first force a log switch to occur.

Make sure an online redo log group is archived (if archiving is enabled) before dropping it. To see whether this has happened, use the V$LOG view.

SELECT GROUP#, ARCHIVED, STATUS FROM V$LOG;

GROUP# ARC STATUS
------ --- ----------------
     1 YES ACTIVE
     2 NO  CURRENT
     3 YES INACTIVE
     4 YES INACTIVE


Drop an online redo log group with the SQL statement ALTER DATABASE with the DROP LOGFILE clause.

The following statement drops redo log group number 3:

ALTER DATABASE DROP LOGFILE GROUP 3;

When an online redo log group is dropped from the database, and you are not using the Oracle Managed Files feature, the operating system files are not deleted from disk. The control files of the associated database are updated to drop the members of the group from the database structure. After dropping an online redo log group, make sure that the drop completed successfully, and then use the appropriate operating system command to delete the dropped online redo log files.

To drop an online redo log member, you must have the ALTER DATABASE system privilege. Consider the following restrictions and precautions before dropping individual online redo log members:

It is permissible to drop online redo log files so that a multiplexed online redo log becomes temporarily asymmetric. For example, if you use duplexed groups of online redo log files, you can drop one member of one group, even though all other groups have two members each. However, you should rectify this situation immediately so that all groups have at least two members, and thereby eliminate the single point of failure possible for the online redo log.
An instance always requires at least two valid groups of online redo log files, regardless of the number of members in the groups. (A group is one or more members.) If the member you want to drop is the last valid member of the group, you cannot drop the member until the other members become valid. To see a redo log file's status, use the V$LOGFILE view. A redo log file becomes INVALID if Oracle cannot access it. It becomes STALE if Oracle suspects that it is not complete or correct. A stale log file becomes valid again the next time its group is made the active group.

You can drop an online redo log member only if it is not part of an active or current group. If you want to drop a member of an active group, first force a log switch to occur.

Make sure the group to which an online redo log member belongs is archived (if archiving is enabled) before dropping the member. To see whether this has happened, use the V$LOG view.

To drop specific inactive online redo log members, use the ALTER DATABASE statement with the DROP LOGFILE MEMBER clause.

The following statement drops the redo log file 'redo_01_01.log', member 1 of group 1:

ALTER DATABASE DROP LOGFILE MEMBER 'c:\oracle\oradata\redologs\redo_01_01.log';

When an online redo log member is dropped from the database, the operating system file is not deleted from disk. Rather, the control files of the associated database are updated to drop the member from the database structure. After dropping an online redo log file, make sure that the drop completed successfully, and then use the appropriate operating system command to delete the dropped online redo log file.

To drop a member of an active group, you must first force a log switch; as a result, that member's group becomes inactive.

How can I force a log switch?
ALTER SYSTEM SWITCH LOGFILE;

How can I Clear an Online Redo Log File??

ALTER DATABASE CLEAR LOGFILE GROUP 3;

This statement overcomes two situations where dropping redo logs is not possible:
(1) there are only two log groups, or
(2) the corrupt redo log file belongs to the current group.
If the corrupt redo log file has not been archived, use the UNARCHIVED keyword in the statement:

ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;

Alter Session privilege (ORA-01031: insufficient privileges)

While trying to trace an event from user Scott, who has the CONNECT and RESOURCE roles, I faced the ORA-01031 error when running the command "alter session set events '10132 trace name context forever, level 8';". After a bit of googling I saw that the CONNECT role changed in 10g R2 and no longer has the ALTER SESSION system privilege. The official documentation says: "beginning in Oracle Database 10g Release 2 (10.2), the CONNECT role has only the CREATE SESSION privilege, all other privileges are removed."

The error stack (code ran on 10.2.0.1 Express Edition)

SQL> connect scott/tiger;
Connected.

scott@XE> alter session set sql_trace=true;
alter session set sql_trace=true
*
ERROR at line 1:
ORA-01031: insufficient privileges

scott@XE> alter session set events '10132 trace name context forever, level 8';

ERROR:
ORA-01031: insufficient privileges
scott@XE> select * from USER_ROLE_PRIVS;

USERNAME                       GRANTED_ROLE                   ADM DEF OS_
------------------------------ ------------------------------ --- --- ---
SCOTT                          CONNECT                        NO  YES NO
SCOTT                          RESOURCE                       NO  YES NO
scott@XE> select * from USER_SYS_PRIVS;

USERNAME                       PRIVILEGE                                ADM
------------------------------ ---------------------------------------- ---
SCOTT                          CREATE VIEW                              NO
SCOTT                          UNLIMITED TABLESPACE                     NO
SCOTT                          CREATE SESSION                           NO


Investigation for the source of the error

In 10.2.0.1

SQL> select version from v$instance;

VERSION
-----------------
10.2.0.1.0

SQL> select * from DBA_SYS_PRIVS where privilege='ALTER SESSION';

GRANTEE                        PRIVILEGE                                ADM
------------------------------ ---------------------------------------- ---
DBA                            ALTER SESSION                            YES
RECOVERY_CATALOG_OWNER         ALTER SESSION                            NO
CTXSYS                         ALTER SESSION                            NO
HR                             ALTER SESSION                            NO
FLOWS_020100                   ALTER SESSION                            NO
XDB                            ALTER SESSION                            NO

6 rows selected.

SQL> select * from ROLE_SYS_PRIVS where role='CONNECT';

no rows selected

In 10.1.0.2

sys@XE> select version from v$instance;

VERSION
-----------------
10.1.0.2.0

sys@XE> select * from DBA_SYS_PRIVS where privilege='ALTER SESSION';

GRANTEE                        PRIVILEGE                                ADM
------------------------------ ---------------------------------------- ---
IX                             ALTER SESSION                            NO
DBA                            ALTER SESSION                            YES
SYS                            ALTER SESSION                            NO
XDB                            ALTER SESSION                            NO
DMSYS                          ALTER SESSION                            NO
WKSYS                          ALTER SESSION                            NO
CONNECT                        ALTER SESSION                            NO
PERFSTAT                       ALTER SESSION                            NO
OLAP_USER                      ALTER SESSION                            NO
RECOVERY_CATALOG_OWNER         ALTER SESSION                            NO

10 rows selected.

sys@XE> select * from ROLE_SYS_PRIVS where role='CONNECT';

ROLE                           PRIVILEGE                                ADM
------------------------------ ---------------------------------------- ---
CONNECT                        CREATE VIEW                              NO
CONNECT                        CREATE TABLE                             NO
CONNECT                        ALTER SESSION                            NO
CONNECT                        CREATE CLUSTER                           NO
CONNECT                        CREATE SESSION                           NO
CONNECT                        CREATE SYNONYM                           NO
CONNECT                        CREATE SEQUENCE                          NO
CONNECT                        CREATE DATABASE LINK                     NO

8 rows selected.

After granting the ALTER SESSION privilege to scott, the error is resolved:
sys@XE> GRANT ALTER SESSION TO SCOTT;

Grant succeeded.

sys@XE>connect scott/tiger

Connected.
scott@XE> alter session set sql_trace=true;

Session altered.

scott@XE> alter session set events '10132 trace name context forever, level 8';

Session altered.
scott@XE> select * from USER_SYS_PRIVS;

USERNAME                       PRIVILEGE                                ADM
------------------------------ ---------------------------------------- ---
SCOTT                          CREATE VIEW                              NO
SCOTT                          UNLIMITED TABLESPACE                     NO
SCOTT                          ALTER SESSION                            NO
SCOTT                          CREATE SESSION                           NO


How to Remove and Drop Datafiles from Tablespace in Oracle Database

Oracle databases store data logically in the form of tablespaces and physically in the form of datafiles. Datafiles are added to tablespaces as the database's space requirements grow. However, there are several reasons you may want to remove datafiles from a tablespace: you accidentally added a wrongly sized or unwanted datafile, the space usage has become smaller and some datafiles are no longer needed, or you are trying to recover a database that fails to start because of missing or corrupted datafiles. Oracle does not provide an easy way or user interface to drop datafiles from a tablespace. Once a datafile is made part of a tablespace, it can no longer be detached or removed from the tablespace, although there are several workarounds.

How to Completely Drop the Whole Tablespace with All Datafiles

The easiest way to drop one or more datafiles is to drop the entire tablespace together with its datafiles. If you no longer need the data contents of the tablespace, the following command drops the tablespace, its datafiles, and the tablespace's contents from the data dictionary. All of the objects that were contained in that tablespace are permanently removed.

DROP TABLESPACE <tablespace_name> INCLUDING CONTENTS AND DATAFILES;

If you don't specify "AND DATAFILES", Oracle will not delete the physical datafiles after the DROP TABLESPACE command, but you can always delete the files from the operating system shell (if the datafiles are locked, restart the server).

Always perform a backup of the database first, as the "DROP TABLESPACE" command is irreversible. It is also good practice to check how many datafiles a tablespace actually has before performing the drop, to avoid mistakes.

How to Check How Many Datafiles a Tablespace Has


To determine and identify all datafiles that belong to a tablespace, use the following query, with the tablespace name in capital letters:

SELECT file_name, tablespace_name
FROM dba_data_files
WHERE tablespace_name = '<TABLESPACE_NAME>';

If a tablespace contains multiple datafiles and you want to drop only one or some of them while keeping the remaining datafiles together with the objects and contents, the objects must first be exported from the affected tablespace. Once exported, the tablespace can be dropped with the "DROP TABLESPACE" command above. Then recreate the tablespace with only the datafile(s) you initially wanted to keep, and import the objects into the recreated tablespace.

If one or more datafiles are missing after a recovery process or an accidental deletion, you can use the ALTER DATABASE DATAFILE '<file_name>' OFFLINE DROP command to take the datafile offline so that the database can start up, after which the troubled tablespace can be dropped. (See the instructions to recover from missing datafiles.)
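Assuming the name of the lost file is known (shown here with the hypothetical placeholders <file_name> and <tablespace_name>), the sequence looks roughly like this:

```sql
-- With the database in MOUNT state, take the lost datafile offline
ALTER DATABASE DATAFILE '<file_name>' OFFLINE DROP;
ALTER DATABASE OPEN;
-- Then drop the tablespace the file belonged to
DROP TABLESPACE <tablespace_name> INCLUDING CONTENTS;
```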

How to Resize a Datafile to Minimum Size

Another alternative to dropping the datafile is to shrink it instead. This option is only possible if there are no extents in the datafile. If there are none, it is possible to resize the datafile down to a very small file (2 blocks), at which point Oracle will no longer create any extents in it. This workaround does not remove the datafile from the tablespace, nor does it delete the file physically, but it carries little risk: no data is truncated, as only empty blocks are released. It simply makes the file unusable and takes up close to no disk space. To resize a datafile, use the following statement:

alter database datafile '<file_name>' resize 8M;

How to Determine the Extents inside Datafile


To check and determine the extents that exist in a datafile, use this query:

SELECT owner, segment_name
FROM dba_extents a, dba_data_files b
WHERE a.file_id = b.file_id
AND b.file_name = '<file_name>';

Drop Oracle Rule Set

I was trying to drop a user but couldn't, as it owned QUEUE tables, a queue rule, and a rule set. So the rule set had to be dropped before dropping the user.
I tried the following and failed with a misleading error.

exec DBMS_RULE_ADM.DROP_RULE_SET(rule_set_name => 'AQ$WF_DEFERRED_QUEUE_M$1', delete_rules => false);

ORA-24170: string.string is created by AQ, cannot be dropped directly
Cause: This object is created by AQ, thus cannot be dropped directly
Action: use dbms_aqadm.drop_subscriber to drop the object

I kept searching how to drop subscriber, but nothing worked.
Here is the solution that worked:

select * from user_objects;

OBJECT_TYPE  OBJECT_NAME               STATUS
RULE         AQ$WF_DEFERRED_QUEUE_M$1  VALID
RULE SET     AQ$WF_DEFERRED_QUEUE_M$1  INVALID

1. Set the following event at session level:

grant alter session to <username>;
alter session set events '25475 trace name context forever, level 2';

2. Drop rule:

execute DBMS_RULE_ADM.DROP_RULE('<schema>.AQ$WF_DEFERRED_QUEUE_M$1',TRUE);
commit;

3. Drop rule set:
execute DBMS_RULE_ADM.DROP_RULE_SET('AQ$WF_DEFERRED_QUEUE_M$1');
commit;

4. Connect as SYSTEM or SYSDBA and try to drop the user again.
drop user <username> cascade;

Thursday, July 22, 2010

Recreate SYSMAN Schema

The following is a hands-on procedure to recreate the SYSMAN schema.

SQL> drop user sysman cascade;
User dropped.

SQL> drop role MGMT_USER;
Role dropped.

SQL> drop user MGMT_VIEW cascade;
User dropped.

SQL> drop public synonym MGMT_TARGET_BLACKOUTS;
Synonym dropped.

SQL> drop public synonym SETEMVIEWUSERCONTEXT;
Synonym dropped.

SQL> exit

oracle@lab:/opt/app/oracle/product/11/db_1/bin> $ORACLE_HOME/sysman/admin/emdrep/bin/RepManager antarctic 2483 sprod -action drop
Enter SYS user's password :
Enter repository user name : sysman
Getting temporary tablespace from database...
Found temporary tablespace: TEMP
Checking SYS Credentials ... rem error switch
OK.
rem error switch
Dropping the repository..
Quiescing DB ... Done.
Checking for Repos User ... Does not Exist.
Repos User does not exist ... Continuing with cleanup ...
Dropping Roles/Synonymns/Tablespaces ... Done.
Unquiescing DB ... Done.
Dropped Repository Successfully.
oracle@lab:/opt/app/oracle/product/11/db_1/bin> ./emca -repos create

STARTED EMCA at Jul 20, 2010 4:41:05 PM
EM Configuration Assistant, Version 11.1.0.6.0 Production
Copyright (c) 2003, 2005, Oracle. All rights reserved.

Enter the following information:
Database SID: sprod
Listener port number: 2483
Password for SYSMAN user:
Do you wish to continue? [yes(Y)/no(N)]: y
Jul 20, 2010 4:41:38 PM oracle.sysman.emcp.EMConfig perform
INFO: This operation is being logged at /opt/oracle/cfgtoollogs/emca/sprod/emca_2010_07_20_16_41_05.log.
Jul 20, 2010 4:41:39 PM oracle.sysman.emcp.EMReposConfig createRepository
INFO: Creating the EM repository (this may take a while) ...
Jul 20, 2010 4:54:44 PM oracle.sysman.emcp.EMReposConfig invoke
INFO: Repository successfully created
Enterprise Manager configuration completed successfully
FINISHED EMCA at Jul 20, 2010 4:54:44 PM

CHECK
SQL> analyze table sysman.mgmt_metrics_1hour validate structure cascade;

Table analyzed.

SQL> analyze index sysman.mgmt_metrics_1hour_pk validate structure;

Index analyzed.

Wednesday, July 21, 2010

PLS-00201: identifier 'DBMS_AQADM' must be declared


While trying to drop a QUEUE table, the subject-lined issue may arise.
The error is a bit confusing, but it's nothing more than a permission issue.

Select OWNER, TABLE_NAME, IOT_NAME, IOT_TYPE from dba_tables;

OWNER  TABLE_NAME          IOT_NAME         IOT_TYPE
SCOTT  SYS_IOT_OVER_74142  AQ$_QUEUE_TABLE  IOT_OVERFLOW

Grant EXECUTE ON DBMS_AQADM to SCOTT;
exec DBMS_AQADM.DROP_QUEUE_TABLE(queue_table=>'AQ$_QUEUE_TABLE',force => TRUE);

Friday, July 16, 2010

Global Temporary Tables

Temporary tables were introduced in Oracle 8i as GLOBAL TEMPORARY tables.
  • the definition of a GLOBAL TEMPORARY table is visible to all sessions
     the data in it is visible only to the session that inserted it
A temporary table can hold session-specific or transaction-specific data depending on how the ON COMMIT clause is used in the table's definition. The temporary table doesn't go away when the session or sessions are finished with it; however, the data in the table is removed.
Temporary tables do not generate the same redo as other tables: the data in them is temporary and does not have to be rebuilt in the event of a failure.
Here is an example creation of both a preserved and deleted temporary table:

SQL>   CREATE GLOBAL TEMPORARY TABLE test6 (
  2     starttestdate DATE,
  3     endtestdate DATE,
  4     results NUMBER)
  5* ON COMMIT PRESERVE ROWS
SQL> /

Table created.

SQL> desc test6
 Name                            Null?    Type
 ------------------------------- -------- ----
 STARTTESTDATE                            DATE
 ENDTESTDATE                              DATE
 RESULTS                                  NUMBER


SQL> CREATE GLOBAL TEMPORARY TABLE test7 (
  2     starttestdate DATE,
  3     endtestdate DATE,
  4     results NUMBER)
  5  ON COMMIT DELETE ROWS
  6  /

Table created.

SQL> desc test7
 Name                            Null?    Type
 ------------------------------- -------- ----
 STARTTESTDATE                            DATE
 ENDTESTDATE                              DATE
 RESULTS                                  NUMBER


SQL> insert into test6 values (sysdate, sysdate+1, 100);

1 row created.

SQL> commit;

Commit complete.

SQL> insert into test7 values (sysdate, sysdate+1, 100);

1 row created.

SQL> select * from test7;

STARTTEST ENDTESTDA    RESULTS
--------- --------- ----------
29-MAR-99 30-MAR-99        100

SQL> commit;

Commit complete.

SQL> select * from test6;

STARTTEST ENDTESTDA    RESULTS
--------- --------- ----------
29-MAR-99 30-MAR-99        100

SQL> select * from test7;

no rows selected

SQL> 
The items to notice in the example above are:
  • With the PRESERVE option, the data is kept after a commit, while with the DELETE option the data is removed from the table when a COMMIT occurs.

  • Even with the GLOBAL option set and select permission granted to public on the temporary table, we couldn't see the data in that table from another session. We could, however, perform a DESCRIBE on the table and insert values into it, which the owner of the temporary table in turn couldn't see.

Thursday, July 15, 2010

Table Fragmentation

When rows are not stored contiguously, or if rows are split onto more than one block, performance decreases because these rows require additional block accesses.

Note that table fragmentation is different from file fragmentation. When a lot of DML operations are applied on a table, the table will become fragmented because DML does not release free space from the table below the HWM.

HWM is an indicator of USED BLOCKS in the database. Blocks below the high water mark (used blocks) have at least once contained data. This data might have been deleted. Since Oracle knows that blocks beyond the high water mark don't have data, it only reads blocks up to the high water mark when doing a full table scan.

DDL that deallocates space, such as TRUNCATE or ALTER TABLE ... MOVE, resets the HWM.

How to find table fragmentation?

SQL> select count(*) from big1;
1000000 rows selected.

SQL> delete from big1 where rownum <= 300000;
300000 rows deleted.

SQL> commit;
Commit complete.

SQL> update big1 set object_id = 0 where rownum <= 350000;
342226 rows updated.

SQL> commit;
Commit complete.

SQL> exec dbms_stats.gather_table_stats('SCOTT','BIG1');
PL/SQL procedure successfully completed.

Table size (with fragmentation)

SQL> select table_name,round((blocks*8),2)||'kb' "size"
2 from user_tables
3 where table_name = 'BIG1';

TABLE_NAME size
------------------------------ ------------------------------------------
BIG1 72952kb

Actual data in table:

SQL> select table_name,round((num_rows*avg_row_len/1024),2)||'kb' "size"
2 from user_tables
3 where table_name = 'BIG1';

TABLE_NAME size
------------------------------ ------------------------------------------
BIG1 30604.2kb

Note: 72952 - 30604 = 42348 KB is wasted space in the table.

The difference between the two values is about 60%, while PCTFREE is only 10% (the default); so the table has roughly 50% extra space which is wasted because it holds no data.
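The same comparison can be run for every table in the schema at once. A sketch that, like the queries above, assumes an 8 KB database block size:

```sql
SELECT table_name,
       ROUND(blocks*8, 2)                              AS allocated_kb,
       ROUND(num_rows*avg_row_len/1024, 2)             AS actual_kb,
       ROUND(blocks*8 - num_rows*avg_row_len/1024, 2)  AS wasted_kb
FROM   user_tables
WHERE  blocks > 0
ORDER  BY wasted_kb DESC;
```

The figures are only as fresh as the last dbms_stats.gather_table_stats run.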

How to reset the HWM / remove fragmentation?

For that we need to reorganize the fragmented table.

We have four options to reorganize fragmented tables:

1. alter table ... move + rebuild indexes
2. create table as select (CTAS)
3. export / truncate / import
4. dbms_redefinition
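Option 4 is not demonstrated below. A minimal sketch of online redefinition with DBMS_REDEFINITION, assuming BIG1 has a primary key and that an interim table BIG1_INT with the same structure has already been created:

```sql
-- Check the table can be redefined using its primary key
EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'BIG1');

-- Start redefinition into the pre-created interim table
EXEC DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'BIG1', 'BIG1_INT');

-- Copy indexes, triggers, constraints and grants to the interim table
DECLARE
  num_errors PLS_INTEGER;
BEGIN
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('SCOTT', 'BIG1', 'BIG1_INT',
    DBMS_REDEFINITION.CONS_ORIG_PARAMS, TRUE, TRUE, TRUE, TRUE, num_errors);
END;
/

-- Swap the tables; BIG1 is now the reorganized copy
EXEC DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'BIG1', 'BIG1_INT');
```

The advantage over the other three options is that the table stays available for DML during the whole operation.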

Option: 1 "alter table ... move + rebuild indexes"

SQL> alter table BIG1 move;
Table altered.

SQL> select status,index_name from user_indexes
2 where table_name = 'BIG1';

STATUS INDEX_NAME
-------- ------------------------------
UNUSABLE BIGIDX

SQL> alter index bigidx rebuild;
Index altered.

SQL> select status,index_name from user_indexes
2 where table_name = 'BIG1';

STATUS INDEX_NAME
-------- ------------------------------
VALID BIGIDX


SQL> exec dbms_stats.gather_table_stats('SCOTT','BIG1');
PL/SQL procedure successfully completed.

SQL> select table_name,round((blocks*8),2)||'kb' "size"
2 from user_tables
3 where table_name = 'BIG1';

TABLE_NAME size
------------------------------ ------------------------------------------
BIG1 38224kb

SQL> select table_name,round((num_rows*avg_row_len/1024),2)||'kb' "size"
2 from user_tables
3 where table_name = 'BIG1';

TABLE_NAME size
------------------------------ ------------------------------------------
BIG1 30727.37kb

Option: 2 "Create table as select (CTAS)"


SQL> create table big2 as select * from big1;
Table created.

SQL> drop table big1 purge;
Table dropped.

SQL> rename big2 to big1;
Table renamed.

SQL> exec dbms_stats.gather_table_stats('SCOTT','BIG1');
PL/SQL procedure successfully completed.

SQL> select table_name,round((blocks*8),2)||'kb' "size"
2 from user_tables
3 where table_name = 'BIG1';

TABLE_NAME size
------------------------------ ------------------------------------------
BIG1 85536kb

SQL> select table_name,round((num_rows*avg_row_len/1024),2)||'kb' "size"
2 from user_tables
3 where table_name = 'BIG1';

TABLE_NAME size
------------------------------ ------------------------------------------
BIG1 68986.97kb

SQL> select status from user_indexes
2 where table_name = 'BIG1';

no rows selected

SQL> --Note we need to create all indexes.

Option: 3 "export / truncate / import"

SQL> select table_name, round((blocks*8),2)||'kb' "size"
2 from user_tables
3 where table_name = 'BIG1';

TABLE_NAME size
------------------------------ ------------------------------------------
BIG1 85536kb

SQL> select table_name, round((num_rows*avg_row_len/1024),2)||'kb' "size"
2 from user_tables
3 where table_name = 'BIG1';

TABLE_NAME size
------------------------------ ------------------------------------------
BIG1 42535.54kb

SQL> select status from user_indexes where table_name = 'BIG1';

STATUS
--------
VALID

SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.1.0.5.0 - Pr
oduction
With the Partitioning, OLAP and Data Mining options

C:\>exp scott/tiger@Orcl file=c:\big1.dmp tables=big1

Export: Release 10.1.0.5.0 - Production on Sat Jul 28 16:30:44 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.5.0 - Produc
tion
With the Partitioning, OLAP and Data Mining options
Export done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set

About to export specified tables via Conventional Path ...
. . exporting table BIG1 468904 rows exported
Export terminated successfully without warnings.

C:\>sqlplus scott/tiger@orcl

SQL*Plus: Release 10.1.0.5.0 - Production on Sat Jul 28 16:31:12 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.1.0.5.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> truncate table big1;

Table truncated.

SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.1.0.5.0 - Production
With the Partitioning, OLAP and Data Mining options

C:\>imp scott/tiger@Orcl file=c:\big1.dmp ignore=y

Import: Release 10.1.0.5.0 - Production on Sat Jul 28 16:31:54 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.1.0.5.0 - Production
With the Partitioning, OLAP and Data Mining options

Export file created by EXPORT:V10.01.00 via conventional path
import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
. importing SCOTT's objects into SCOTT
. . importing table "BIG1" 468904 rows imported
Import terminated successfully without warnings.

C:\>sqlplus scott/tiger@orcl

SQL*Plus: Release 10.1.0.5.0 - Production on Sat Jul 28 16:32:21 2007

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.1.0.5.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> select table_name, round((blocks*8),2)||'kb' "size"
2 from user_tables
3 where table_name = 'BIG1';

TABLE_NAME size
------------------------------ ------------------------------------------
BIG1 85536kb

SQL> select table_name, round((num_rows*avg_row_len/1024),2)||'kb' "size"
2 from user_tables
3 where table_name = 'BIG1';

TABLE_NAME size
------------------------------ ------------------------------------------
BIG1 42535.54kb

SQL> exec dbms_stats.gather_table_stats('SCOTT','BIG1');

PL/SQL procedure successfully completed.

SQL> select table_name, round((blocks*8),2)||'kb' "size"
2 from user_tables
3 where table_name = 'BIG1';

TABLE_NAME size
------------------------------ ------------------------------------------
BIG1 51840kb

SQL> select table_name, round((num_rows*avg_row_len/1024),2)||'kb' "size"
2 from user_tables
3 where table_name = 'BIG1';

TABLE_NAME size
------------------------------ ------------------------------------------
BIG1 42542.27kb

SQL> select status from user_indexes where table_name = 'BIG1';

STATUS
--------
VALID

Option: 4 "dbms_redefinition"

SQL> exec dbms_redefinition.can_redef_table('SCOTT','BIG1',-
> dbms_redefinition.cons_use_pk);

PL/SQL procedure successfully completed.

SQL> create table TABLE1 (
2 no number,
3 name varchar2(20) default 'NONE',
4 ddate date default SYSDATE);
Table created.

SQL> alter table table1 add constraint pk_no primary key(no);
Table altered.

SQL> begin
2 for x in 1..100000 loop
3 insert into table1 ( no , name, ddate)
4 values ( x , default, default);
5 end loop;
6 end;
7 /
PL/SQL procedure successfully completed.

SQL> create or replace trigger tri_table1
2 after insert on table1
3 begin
4 null;
5 end;
6 /
Trigger created.

SQL> select count(*) from table1;

COUNT(*)
----------
100000

SQL> delete table1 where rownum <= 50000;

50000 rows deleted.

SQL> commit;
Commit complete.

SQL> exec dbms_stats.gather_table_stats('SCOTT','TABLE1');
PL/SQL procedure successfully completed.

SQL> select table_name, round((blocks*8),2)||'kb' "size"
2 from user_tables
3 where table_name = 'TABLE1';

TABLE_NAME size
------------------------------ ------------------------------------------
TABLE1 2960kb

SQL> select table_name, round((num_rows*avg_row_len/1024),2)||'kb' "size"
2 from user_tables
3 where table_name = 'TABLE1';

TABLE_NAME size
------------------------------ ------------------------------------------
TABLE1 822.69kb

SQL> --Minimum privs required: the "DBA" role, or EXECUTE on the dbms_redefinition pkg
SQL> --First check that the table is a candidate for redefinition.
SQL>
SQL> exec sys.dbms_redefinition.can_redef_table('SCOTT',-
> 'TABLE1',-
> sys.dbms_redefinition.cons_use_pk);

PL/SQL procedure successfully completed.
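
If the table had no usable primary key, the same check (and the later start/finish calls) could be driven by ROWID instead, using the package's other constant; a sketch:

```sql
-- ROWID-based variant, for tables without a primary key
EXEC sys.dbms_redefinition.can_redef_table('SCOTT', 'TABLE1', sys.dbms_redefinition.cons_use_rowid);
```

Here cons_use_pk is the right choice because TABLE1 has the PK_NO primary key.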

SQL> --After verifying that the table can be redefined online, you manually create an empty interim table (in the same schema as the table to be redefined).
SQL>
SQL> create table TABLE2 as select * from table1 WHERE 1 = 2;
Table created.

SQL> exec sys.dbms_redefinition.start_redef_table ( 'SCOTT',-
> 'TABLE1',-
> 'TABLE2');
PL/SQL procedure successfully completed.

SQL> --This procedure keeps the interim table synchronized with the original table.
SQL>
SQL> exec sys.dbms_redefinition.sync_interim_table ('SCOTT',-
> 'TABLE1',-
> 'TABLE2');
PL/SQL procedure successfully completed.

SQL> --Create PRIMARY KEY on interim table(TABLE2)
SQL> alter table TABLE2
2 add constraint pk_no1 primary key (no);
Table altered.

SQL> create trigger tri_table2
2 after insert on table2
3 begin
4 null;
5 end;
6 /
Trigger created.

SQL> --Disable any foreign keys on the original table, if they exist, before finishing this process.
SQL>
SQL> exec sys.dbms_redefinition.finish_redef_table ( 'SCOTT',-
> 'TABLE1',-
> 'TABLE2');
PL/SQL procedure successfully completed.

SQL> exec dbms_stats.gather_table_stats('SCOTT','TABLE1');
PL/SQL procedure successfully completed.

SQL> select table_name, round((blocks*8),2)||'kb' "size"
2 from user_tables
3 where table_name = 'TABLE1';

TABLE_NAME size
------------------------------ ------------------------------------------
TABLE1 1376kb

SQL> select table_name, round((num_rows*avg_row_len/1024),2)||'kb' "size"
2 from user_tables
3 where table_name = 'TABLE1';

TABLE_NAME size
------------------------------ ------------------------------------------
TABLE1 841.4kb

SQL> select status,constraint_name
2 from user_constraints
3 where table_name = 'TABLE1';

STATUS CONSTRAINT_NAME
-------- ------------------------------
ENABLED PK_NO1

SQL> select status ,trigger_name
2 from user_triggers
3 where table_name = 'TABLE1';

STATUS TRIGGER_NAME
-------- ------------------------------
ENABLED TRI_TABLE2

SQL> drop table TABLE2 PURGE;
Table dropped.
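
For reuse, the redefinition steps demonstrated above can be collected into a single script (same schema and table names as in the demo; run as a user with the DBA role or EXECUTE on dbms_redefinition):

```sql
-- 1. Verify the table can be redefined online (PK-based).
EXEC sys.dbms_redefinition.can_redef_table('SCOTT','TABLE1', sys.dbms_redefinition.cons_use_pk);

-- 2. Create an empty interim table with the same structure.
CREATE TABLE TABLE2 AS SELECT * FROM table1 WHERE 1 = 2;

-- 3. Start the redefinition.
EXEC sys.dbms_redefinition.start_redef_table('SCOTT','TABLE1','TABLE2');

-- 4. Keep the interim table in sync with ongoing DML.
EXEC sys.dbms_redefinition.sync_interim_table('SCOTT','TABLE1','TABLE2');

-- 5. Recreate dependent objects (constraints, triggers) on the interim table.
ALTER TABLE TABLE2 ADD CONSTRAINT pk_no1 PRIMARY KEY (no);

-- 6. Swap the tables and clean up.
EXEC sys.dbms_redefinition.finish_redef_table('SCOTT','TABLE1','TABLE2');
DROP TABLE TABLE2 PURGE;
```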

Wednesday, July 14, 2010

Find Full Table Scan Queries


This query lists the statements (first 1,000 characters of each only, as returned by v$sqlarea.sql_text) that perform full table scans. The total number of rows and blocks for each table is also displayed so that you can judge whether the full scan is degrading performance.
SELECT   sp.object_owner, sp.object_name,
         (SELECT sql_text
            FROM v$sqlarea sa
           WHERE sa.address = sp.address
             AND sa.hash_value = sp.hash_value) sqltext,
         (SELECT executions
            FROM v$sqlarea sa
           WHERE sa.address = sp.address
             AND sa.hash_value = sp.hash_value) no_of_full_scans,
         (SELECT    LPAD (NVL (TRIM (TO_CHAR (num_rows)), ' '),
                          15,
                          ' '
                         )
                 || ' | '
                 || LPAD (NVL (TRIM (TO_CHAR (blocks)), ' '), 15, ' ')
                 || ' | '
                 || BUFFER_POOL
            FROM dba_tables
           WHERE table_name = sp.object_name AND owner = sp.object_owner)
                                                           "rows|blocks|pool"
    FROM v$sql_plan sp
   WHERE operation = 'TABLE ACCESS'
     AND options = 'FULL'
     AND object_owner IN ('USERNAME')
ORDER BY 1, 2;
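
To see at a glance which tables are full-scanned most often, the same v$sql_plan rows can be aggregated (a sketch using only the columns from the query above):

```sql
SELECT object_owner, object_name, COUNT(*) full_scan_plans
  FROM v$sql_plan
 WHERE operation = 'TABLE ACCESS'
   AND options = 'FULL'
 GROUP BY object_owner, object_name
 ORDER BY full_scan_plans DESC;
```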

Alternatively, the following query finds recent full-table-scan statements from the AWR history; frequent full scans often mean an index is missing:
SELECT *
FROM dba_hist_active_sess_history
WHERE sql_plan_options = 'FULL'
ORDER BY sample_time DESC;
Take a SQL_ID from the result and retrieve its text, parsing schema, and module from v$sql:
SELECT sql_text, parsing_schema_name, module
FROM v$sql
WHERE sql_id = 'drtn4c1accwrw';