DBMS_METADATA – Schema extractions part II

Previously, I blogged about how DBMS_METADATA, while no doubt powerful, seemed a little less than user-friendly in its implementation and documentation.

I also said that I would get to the bottom of how to reverse engineer some database creation scripts using very few calls to DBMS_METADATA and without doing any looping around the data dictionary tables.

Unfortunately, no good progress to report so far.

I was even hoping to cheat by tracing an EXPDP DATAPUMP export and seeing how that called DBMS_METADATA, but it didn’t produce the goods. Yep, not surprised – thought it sounded too simple. So for now this is adrift, sinking further down my to-do list.

Alternative for dbms_output.put_line

The visitor statistics for a website/blog can be very revealing. And it’s interesting because it sort of turns the tables: rather than a small handful of surfers finding my site because search engines match what I’ve written to what they’re searching for, I can write an article based on the searches that brought people to my site.

Today I have had a significant number of hits from people searching for ‘alternative for dbms_output.put_line’. I’m not sure why that search would lead someone to my site.

However, if you haven’t already come across it in your searches, one thing that you might find useful, whoever you are, is debug.f, via Tom Kyte, and here is one of his blog entries on instrumentation.

Depending on what you’re trying to achieve, another alternative is to write your messages to a table. My current method of choice is to write messages to a table using an autonomous transaction, which also sets session information using dbms_application_info. The sort of information logged is pl/sql module entry/exit points and other significant messages such as session longops information (together with a maintenance proc and job to clear down “old” messages).
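As a rough illustration of the idea – the table, procedure and column names below are my own invention for this sketch, not a standard API:


CREATE TABLE debug_log
( log_time TIMESTAMP DEFAULT SYSTIMESTAMP
, module   VARCHAR2(48)
, message  VARCHAR2(4000)
);

CREATE OR REPLACE PROCEDURE log_msg
( p_module  IN VARCHAR2
, p_message IN VARCHAR2
)
AS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  -- flag where we are for anyone watching v$session
  DBMS_APPLICATION_INFO.SET_MODULE(p_module, SUBSTR(p_message, 1, 32));
  INSERT INTO debug_log (module, message)
  VALUES (p_module, p_message);
  -- autonomous transaction: this commit does not affect the caller
  COMMIT;
END log_msg;
/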

DBMS_METADATA – Schema extractions – Unintuitive

Today, I’d like to reverse engineer some vanilla schema and object creation scripts for the database that I’m working on so that I can whack them in source control.

It’s always a nice-to-have – although over time you often get to a position where you don’t need them / would never use them – but best practice and all that….

First up, I know that I could use the imp/exp utility to extract an indexfile and then edit that, take out the comments, reformat it and separate it out into relevant scripts.

I could also use the more recent souped-up imp/exp – DATAPUMP, using impdp/expdp. This would be preferable to imp/exp because it’s better and faster, and because notices regarding the de-support of imp/exp have been issued. You could do that like this:


expdp directory=exp_dir content=metadata_only full=y dumpfile=expdat.dmp logfile=export.log

But I want readable, editable scripts. So, I could then use impdp to create the file using the sqlfile argument (it seems a little weird that it’s impdp that creates the indexfile equivalent, not expdp):


impdp directory=exp_dir dumpfile=expdat.dmp logfile=import.log sqlfile=import.sql

But I would like to use the DBMS_METADATA package because it seems made for the job.

I’ve read and re-read the Oracle documentation and is it just me or is it getting less and less clear?

The simplest use of the DBMS_METADATA package to extract the DDL for a single named object is pretty straightforward. For example:


SQL>set lines 120
SQL>set pages 9999
SQL>set long 10000
SQL>l
1 select dbms_metadata.get_ddl('TABLE','MY_TABLE')
2* from dual
SQL>/


DBMS_METADATA.GET_DDL('TABLE','MY_TABLE')
--------------------------------------------------------------------------------
CREATE TABLE "NOL_DBA"."MY_TABLE"
( "COL1" NUMBER
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
TABLESPACE "NOL_DBA_TS"

However, stray from that usage and you’re in a world of poorly documented pain.

It’s interesting how DBMS_METADATA has been implemented: it returns XML and, to get the DDL format, it applies XSLT.

You can see some information about this here:


SELECT *
FROM sys.metaxsl$
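
And you can see the raw XML half of the process directly: DBMS_METADATA.GET_XML is the counterpart of GET_DDL that returns the document before any stylesheet has been applied. For example, reusing the table from above:

SQL>select dbms_metadata.get_xml('TABLE','MY_TABLE')
  2  from dual;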

Actually, I should take a step back.

As mentioned above, the reason why I’m looking at these things is that I want to figure out how to extract all the necessary DDL to recreate all the bespoke schemas in my database.

I tried using the examples in the documentation but they’re not doing exactly what I want, which is fair enough.
But using these as a starting point, I’m finding it tough going.
My current implementation is extracting everything, including far too many SYS objects, and as a result it’s taking forever. I don’t know why it’s doing this, but that’s what I’m trying to figure out.

(And I will post again later when/if that’s done).

It’s not working, but I’m tracing the session using event 10046 to find some details about what Oracle is doing behind the scenes of the DBMS_METADATA package.
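
For reference, the usual way to switch that trace on and off in your own session (level 12 includes bind and wait details) is:

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';
-- run the DBMS_METADATA calls, then:
ALTER SESSION SET EVENTS '10046 trace name context off';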

And that’s where I got to SYS.METAXSL$, although I already knew from the documentation that it was capable of returning XML and doing transformations, etc.

The other useful SYS object that I’m going to be looking at is SYS.METANAMETRANS$, although I’m sure I will be having a look at the rest of them:


select * from dba_tables
where owner = 'SYS'
and table_name like 'META%';


TABLE_NAME
------------------------------
METAXSLPARAM$
METAXSL$
METAVIEW$
METASTYLESHEET
METASCRIPTFILTER$
METASCRIPT$
METAPATHMAP$
METANAMETRANS$
METAFILTER$

What’s interesting about SYS.METANAMETRANS$ is that it ties in nicely with the log that I was looking at from EXPDP. EXPDP makes use of DBMS_METADATA to do its business. The log of the above EXPDP command reveals entries like this:


Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, OLAP and Data Mining options
Starting "SYS"."SYS_EXPORT_FULL_02": /******** AS SYSDBA full=y content=metadata_only directory=exp_dir
Processing object type DATABASE_EXPORT/TABLESPACE
Processing object type DATABASE_EXPORT/PROFILE
Processing object type DATABASE_EXPORT/SYS_USER/USER
Processing object type DATABASE_EXPORT/SCHEMA/USER
Processing object type DATABASE_EXPORT/ROLE
Processing object type DATABASE_EXPORT/GRANT/SYSTEM_GRANT/PROC_SYSTEM_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/GRANT/SYSTEM_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/ROLE_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/DEFAULT_ROLE
Processing object type DATABASE_EXPORT/SCHEMA/TABLESPACE_QUOTA
Processing object type DATABASE_EXPORT/RESOURCE_COST
Processing object type DATABASE_EXPORT/TRUSTED_DB_LINK
Processing object type DATABASE_EXPORT/SCHEMA/SEQUENCE/SEQUENCE
Processing object type DATABASE_EXPORT/SCHEMA/SEQUENCE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/DIRECTORY/DIRECTORY
Processing object type DATABASE_EXPORT/DIRECTORY/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/CONTEXT
Processing object type DATABASE_EXPORT/SCHEMA/PUBLIC_SYNONYM/SYNONYM
Processing object type DATABASE_EXPORT/SCHEMA/SYNONYM
Processing object type DATABASE_EXPORT/SCHEMA/TYPE/INC_TYPE
Processing object type DATABASE_EXPORT/SCHEMA/TYPE/TYPE_SPEC
Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PRE_SYSTEM_ACTIONS/PROCACT_SYSTEM
Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PROCOBJ
Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/POST_SYSTEM_ACTIONS/PROCACT_SYSTEM
......

So that was my starting point with DBMS_METADATA, fiddling with DBMS_METADATA.OPEN, DBMS_METADATA.SET_FILTER, DBMS_METADATA.ADD_TRANSFORM, DBMS_METADATA.FETCH_CLOB and DBMS_METADATA.CLOSE.
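For anyone fiddling along at home, the basic shape of those calls is sketched below – a minimal version for a single object type and schema (the schema name is made up), not my finished script:

DECLARE
  l_handle    NUMBER;
  l_transform NUMBER;
  l_ddl       CLOB;
BEGIN
  -- open a stream of objects of the given type
  l_handle := DBMS_METADATA.OPEN('TABLE');
  -- restrict the stream to one schema
  DBMS_METADATA.SET_FILTER(l_handle, 'SCHEMA', 'MY_SCHEMA');
  -- ask for DDL output rather than the default XML
  l_transform := DBMS_METADATA.ADD_TRANSFORM(l_handle, 'DDL');
  LOOP
    l_ddl := DBMS_METADATA.FETCH_CLOB(l_handle);
    EXIT WHEN l_ddl IS NULL;
    -- do something useful with the DDL; just show the start of it here
    DBMS_OUTPUT.PUT_LINE(SUBSTR(l_ddl, 1, 255));
  END LOOP;
  DBMS_METADATA.CLOSE(l_handle);
END;
/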

And the interesting thing is that these log entries above tally with the entries in SYS.METANAMETRANS$.

So, I think I’m making progress, albeit slow (almost as slow as DBMS_METADATA itself – how come DATAPUMP gets it to move so quickly? Is it parallelism and/or queues?)

When (not if, please don’t let it be if) I’ve figured this out, I’ll share it. I have looked hard at the documentation and done numerous Google searches but to no obvious avail.

It looks like something that has been over-engineered at the expense of ease of use.

DBMS_REDEFINITION – ORA-42016, ORA-04080 & ORA-42011

I have a table with nearly five million rows in it. One of the columns is a LONG which I need to convert to a CLOB, a) in the long term and b) for a search prototype using Oracle Text and a multi-column datastore.

When this is eventually done on production, it will have to be done with minimal impact, so DBMS_REDEFINITION seems to be the best option.

I did initially try ALTER TABLE <tablename> MODIFY <columnname> CLOB but it just took far too long. I made the mistake of not having enough free space in the tablespace, and it would take nine or ten hours to come back and say it was unable to create the necessary temp extents. Presumably it creates new temp extents for the CLOB column which, on completion, it converts to permanent. Unfortunately, I was just letting this run overnight so I wasn’t proactively monitoring the tablespace.
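For the record, that direct conversion is a one-liner (table and column names here are hypothetical):

ALTER TABLE my_table MODIFY (long_col CLOB);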

Having said that, the second time I ran the statement I had set one of the datafiles in the tablespace to autoextend. However, it only extended out to just over 30 Gig – presumably a datafile or operating system size limit – and then the same error occurred.

So, it made sense to do it with DBMS_REDEFINITION, as that is what we’d need to do in production, and I’d not used it before, so it was a good opportunity.

The steps are well documented out there, but broadly (there’s a consolidated sketch after the list):

1. Manually create the redefinition (interim) table with the datatypes and column names required. In this case, the definition of the new table was exactly the same as the existing table, with the exception of the CLOB datatype for the previously LONG column.

2. Grant the necessary grants to the table owner, namely: CREATE ANY TABLE, ALTER ANY TABLE, DROP ANY TABLE, LOCK ANY TABLE, SELECT ANY TABLE, CREATE ANY TRIGGER, CREATE ANY INDEX and EXECUTE ON DBMS_REDEFINITION.

3. Check that the target table can be redefined using DBMS_REDEFINITION.CAN_REDEF_TABLE (probably more sensibly done right at the start, before step 1).

4. Use DBMS_REDEFINITION.START_REDEF_TABLE to define how the old table maps to the new table. A couple of points to note – firstly (and probably obviously, but it took a couple of failed attempts for the penny to drop), you need to specify all the columns. So, the parameter is a comma-separated string like this:

col_mapping => 'col1 col1, col2 col2, col3 col3 ...... TO_LOB(colN) colN'

If you don’t get this right then you get:

ORA-42016: shape of interim table does not match specified column mapping

Secondly, there is an options_flag parameter which indicates how the mapping is done. There are two methods – by primary key (DBMS_REDEFINITION.CONS_USE_PK) or by rowid (DBMS_REDEFINITION.CONS_USE_ROWID) – with the former preferred over the latter where possible.

5. Use DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS to copy all the TRIGGERS, INDEXES, CONSTRAINTS and PRIVILEGES.

6. Call DBMS_REDEFINITION.SYNC_INTERIM_TABLE to sync up any DML activity that has happened since the start.

7. Call DBMS_REDEFINITION.FINISH_REDEF_TABLE to complete the redefinition and swap the interim table in for the original.
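Putting the steps together, here is the consolidated sketch promised above. The schema, table and column names are made up, and error handling is left out:

DECLARE
  l_num_errors PLS_INTEGER;
BEGIN
  -- step 3: check the table is a candidate (here, by primary key)
  DBMS_REDEFINITION.CAN_REDEF_TABLE('MY_SCHEMA', 'MY_TABLE',
      DBMS_REDEFINITION.CONS_USE_PK);

  -- step 4: start the redefinition, converting the LONG on the way
  DBMS_REDEFINITION.START_REDEF_TABLE(
      uname        => 'MY_SCHEMA',
      orig_table   => 'MY_TABLE',
      int_table    => 'MY_TABLE_REDEF',
      col_mapping  => 'col1 col1, col2 col2, TO_LOB(col3) col3',
      options_flag => DBMS_REDEFINITION.CONS_USE_PK);

  -- step 5: copy triggers, indexes, constraints and privileges
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS(
      uname            => 'MY_SCHEMA',
      orig_table       => 'MY_TABLE',
      int_table        => 'MY_TABLE_REDEF',
      copy_indexes     => DBMS_REDEFINITION.CONS_ORIG_PARAMS,
      copy_triggers    => TRUE,
      copy_constraints => TRUE,
      copy_privileges  => TRUE,
      ignore_errors    => FALSE,
      num_errors       => l_num_errors);

  -- steps 6 and 7: catch up any DML, then swap the tables over
  DBMS_REDEFINITION.SYNC_INTERIM_TABLE('MY_SCHEMA', 'MY_TABLE', 'MY_TABLE_REDEF');
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('MY_SCHEMA', 'MY_TABLE', 'MY_TABLE_REDEF');
END;
/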

 The only minor issues that I had were at the SYNC_INTERIM_TABLE step and, following on from my erroneous or incomplete actions there, at the FINISH_REDEF_TABLE step.

When I got to the SYNC_INTERIM_TABLE step, an error was raised:


ERROR at line 1:
ORA-42009: error occurred while synchronizing the redefinition
ORA-12008: error in materialized view refresh path
ORA-04098: trigger is invalid and failed re-validation
ORA-06512: at "SYS.DBMS_REDEFINITION", line 117
ORA-06512: at "SYS.DBMS_REDEFINITION", line 1448
ORA-06512: at line 55

Now, I got this because I was working with a cut down version of the database with some objects missing. As a result, some of the triggers on the existing table were invalid and therefore some of the newly created triggers on the redef table were invalid. So, I thought I would just drop these triggers from the redef table.

This was fine and the SYNC_INTERIM_TABLE step passed ok.
However, at the FINISH_REDEF_TABLE step, another error:

ERROR at line 1:
ORA-04080: trigger 'TMP$$_NEWSDELETE0' does not exist
ORA-06512: at "SYS.DBMS_REDEFINITION", line 76
ORA-06512: at "SYS.DBMS_REDEFINITION", line 1376
ORA-06512: at line 2

It didn’t exist because I had dropped it, but obviously something somewhere needed it.
I tried using DBMS_REDEFINITION.UNREGISTER_DEPENDENT_OBJECT but, although the call worked, the error above persisted.

I could see in the DBA_REDEFINITION_OBJECTS view that these non-existent objects were still expected.
But I couldn’t find any info on Metalink or Google about this.
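For what it’s worth, the check was just a straight query of the view:

select * from dba_redefinition_objects;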

So, I recreated the triggers with the expected temporary names and made them valid by giving them a NULL; trigger body.
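Something along these lines, in other words – the triggering event and table below are illustrative guesses rather than the actual definitions:

CREATE OR REPLACE TRIGGER tmp$$_newsdelete0
BEFORE DELETE ON my_table_redef
FOR EACH ROW
BEGIN
  NULL; -- valid but does nothing
END;
/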

Bish, bash, bosh, FINISH_REDEF_TABLE done and dusted, process finished and all that was left was to manually drop the redef table created at step 1.

In summary, it’s a pretty neat bit of functionality, particularly for a high availability system, and I know we will use this a lot if and when we migrate from 8i to 10g.
