Friday, February 9, 2018

ORA-14061: data type or length of an index partitioning column may not be changed



Error while altering a table column size:
ALTER TABLE TEST_TABLE MODIFY COL_N_NUMBER VARCHAR2(13);
Error report -
SQL Error: ORA-14061: data type or length of an index partitioning column may not be changed
14061. 00000 -  "data type or length of an index partitioning column may not be changed"
*Cause:    User issued ALTER TABLE statement attempting to modify
           data type and/or length of a column used to partition some index
           defined on the table named in ALTER TABLE statement, which is
           illegal
*Action:   Avoid modifying data type and/or length of index
           partitioning column(s)

 

Solution:
a) Check the constraints and indexes on the column to be modified

select CONSTRAINT_NAME, TABLE_NAME, CONSTRAINT_TYPE, INDEX_NAME, STATUS from user_constraints where TABLE_NAME='TEST_TABLE';

We found that there is a unique constraint (constraint_type='U') on column COL_N_NUMBER, so we need to disable it before modifying the column size.
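
Since ORA-14061 is raised for an index partitioning column, it also helps to confirm which indexes include the column being modified. A quick check using the standard USER_IND_COLUMNS view (table and column names are the ones from this example):

select index_name, column_name from user_ind_columns where table_name='TEST_TABLE' and column_name='COL_N_NUMBER';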

b) Get the unique constraint name (constraint_type='U') and disable it first

SQL> alter table TEST_TABLE disable constraint TESTI_TABLE_1;
Table altered.



c) Modify the column size
SQL> ALTER TABLE TEST_TABLE MODIFY COL_N_NUMBER VARCHAR2(13 CHAR);
Table altered.



d) Enable the constraint back
SQL> alter table TEST_TABLE enable constraint TESTI_TABLE_1;
Table altered.
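
Optionally, verify that the constraint is enabled again using the same USER_CONSTRAINTS view as in step a):

select constraint_name, status from user_constraints where table_name='TEST_TABLE' and constraint_name='TESTI_TABLE_1';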


 




 

Wednesday, October 12, 2016

Import Error - ORA-31634: job already exists ORA-31664: unable to construct unique job name when defaulted



I was getting the below error while trying to import a schema from an 11gR2 database into another database of the same version.

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
ORA-31634: job already exists
ORA-31664: unable to construct unique job name when defaulted



To resolve this error, drop the Data Pump master tables that were created by previous import runs. You can find all such tables using the below query and then get rid of them (DROP TABLE).


SELECT o.status, o.object_id, o.object_type,
       o.owner||'.'||o.object_name "OWNER.OBJECT", o.created, j.state
  FROM dba_objects o, dba_datapump_jobs j
 WHERE o.owner = j.owner_name
   AND o.object_name = j.job_name
   AND o.owner = '&SCHEMA'
   AND j.state = 'NOT RUNNING'
 ORDER BY 4, 2;


This lists the Data Pump master tables for the schema you are using for the import whose jobs have a state of "NOT RUNNING"; dropping those tables clears the orphaned job names so that new jobs can be created.
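
If there are many of them, you can also generate the DROP statements from the same join (a minimal sketch; review the output before running the generated statements):

SELECT 'DROP TABLE '||o.owner||'.'||o.object_name||' PURGE;'
  FROM dba_objects o, dba_datapump_jobs j
 WHERE o.owner = j.owner_name
   AND o.object_name = j.job_name
   AND o.owner = '&SCHEMA'
   AND j.state = 'NOT RUNNING';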

Monday, July 11, 2016

Error during impdp in 11g - ORA-31693, ORA-29913, ORA-38301



While importing data (using impdp; the DB is 11.2.0.4) I was getting the below error:

ORA-31693: Table data object "SCHEMA"."TABLE":"PARTITION_P1" failed to load/unload and is being skipped due to error:
ORA-29913: error in executing ODCIEXTTABLEFETCH callout
ORA-38301: can not perform DDL/DML over objects in Recycle Bin

Solution:

Purge the DBA recycle bin. To purge the DBA recycle bin, the user needs the SYSDBA privilege.
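
Before purging, you can optionally check what is sitting in the recycle bin using the standard DBA_RECYCLEBIN view:

SQL> select owner, object_name, original_name, type, droptime from dba_recyclebin order by droptime;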



SQL> purge dba_recyclebin;

DBA Recyclebin purged.

Wednesday, July 8, 2015

Script to resize datafile in Oracle Database


Script to generate resize statements for datafiles (to reclaim free space above the high-water mark of each datafile):

select 'alter database datafile '''||file_name||''' resize '||round(highwater+2)||'m;'
from (
  select /*+ rule */
         a.tablespace_name,
         a.file_name,
         a.bytes/1024/1024 file_size_mb,
         (b.maximum + c.blocks - 1) * d.db_block_size/1024/1024 highwater
    from dba_data_files a,
         (select file_id, max(block_id) maximum
            from dba_extents
           group by file_id) b,
         dba_extents c,
         (select value db_block_size
            from v$parameter
           where name = 'db_block_size') d
   where a.file_id  = b.file_id
     and c.file_id  = b.file_id
     and c.block_id = b.maximum
   order by a.tablespace_name, a.file_name
);
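
Running the script produces one statement per datafile along these lines (the path and size below are just illustrative):

alter database datafile '/u01/oradata/ORCL/users01.dbf' resize 512m;

Note that a resize will fail with ORA-03297 if the file still contains used data beyond the requested size, so not every generated statement is guaranteed to succeed.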

Wednesday, February 4, 2015

Start and stop HADR DB2 database

Here are the steps to STOP HADR on a DB2 database:
-------------------------------------------------

1. Deactivate HADR on standby database:

db2 deactivate db DB2DB01
DB20000I  The DEACTIVATE DATABASE command completed successfully.

2. Stop HADR on standby database:

db2 STOP HADR ON DB DB2DB01
DB20000I  The STOP HADR ON DATABASE command completed successfully.


You need to run similar commands on the primary database as well:

1. Deactivate HADR on the primary database:
db2 deactivate db DB2DB01
DB20000I  The DEACTIVATE DATABASE command completed successfully.

2. Stop HADR on the primary database:
db2 stop hadr on database DB2DB01
DB20000I  The STOP HADR ON DATABASE command completed successfully.

Here are the steps to START HADR on a DB2 database:
-------------------------------------------------

1. Activate HADR on standby database:

 db2 activate db DB2DB01
 DB20000I  The ACTIVATE DATABASE command completed successfully.    
      
2. Start HADR on standby database:
      
 db2 START HADR ON DB DB2DB01 AS STANDBY
 DB20000I  The START HADR ON DATABASE command completed successfully.

You need to run similar commands on the primary database as well:

1. Activate HADR on Primary database:

db2 activate db DB2DB01
DB20000I  The ACTIVATE DATABASE command completed successfully.

2. Start HADR on Primary database:

db2 start hadr on database DB2DB01 as primary
DB20000I  The START HADR ON DATABASE command completed successfully.
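
Once both sides are up, you can verify the HADR role and state with db2pd (database name as used in this example):

db2pd -db DB2DB01 -hadr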

Wednesday, January 28, 2015

Slow Export job in 11gR2 / Oracle Streams AQ: enqueue blocked on low memory


Today one of our frequently run export jobs started failing due to the following error in our stage database (Oracle Database 11.2.0.4 on OEL 6):

Oracle Streams AQ: enqueue blocked on low memory

On checking further I found that the export job was slow due to a low streams_pool_size in the database. The wait event of the job was showing as below:

Oracle Streams AQ: enqueue blocked on low memory

 

I also checked Oracle Support (Metalink) for reference, which you can refer to as well:

EXPDP And IMPDP Slow Performance In 11gR2 and 12cR1 And Waits On Streams AQ: Enqueue Blocked On Low Memory (Doc ID 1596645.1)

The issue got resolved by increasing streams_pool_size to 256MB. The export started running as expected.
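
For reference, this is roughly the change that was made (256M is the value that worked in our environment; size it according to yours):

SQL> show parameter streams_pool_size
SQL> alter system set streams_pool_size=256M scope=both;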



Thursday, January 1, 2015

ORA-01466: unable to read data - table definition has changed


Looking at the error, it appears some DDL changes were happening in the database while the export was running. On checking further and looking at the LAST_DDL_TIME of some of the tables for which expdp failed, the timestamp was exactly the time when the export was running.

select object_name, subobject_name, object_type, timestamp, last_ddl_time from dba_objects where object_name = '&OBJECT_NAME';

In my case, partitions (OBJECT_TYPE = 'TABLE PARTITION') got added to the tables while the export was running, which was a DDL change made by automated jobs set up for the same schema/tables we were exporting.

To fix this issue, do not make any DDL changes in the database while the export dump is in progress.
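
In our case that meant pausing the automated partition-maintenance job for the duration of the export. A minimal sketch, assuming the job is a DBMS_SCHEDULER job (the job name below is hypothetical):

SQL> exec dbms_scheduler.disable('SCHEMA.ADD_PARTITION_JOB');
-- run the expdp, then re-enable the job
SQL> exec dbms_scheduler.enable('SCHEMA.ADD_PARTITION_JOB');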