• IBM Consulting

    DBA Consulting can help you with IBM BI and web-related work. IBM on Linux is also part of our portfolio.

  • Oracle Consulting

    For Oracle-related consulting, database work, support and migrations, call DBA Consulting.

  • Novell/RedHat Consulting

    For all Novell SUSE Linux and SAP on SUSE Linux questions related to OS and BI solutions. And of course also for the great Red Hat products, such as Red Hat Enterprise Linux, JBoss middleware and BI on Red Hat.

  • Microsoft Consulting

    Consulting services for Microsoft Windows Server 2012 onwards, Microsoft Windows 7 clients and higher, and Microsoft cloud services (Azure, Office 365, etc.).

  • Citrix Consulting

    Citrix VDI-in-a-Box, desktop virtualization and Citrix NetScaler security.

  • Web Development

    Web development: static websites, CMS websites (Drupal 7/8, WordPress, Joomla), responsive websites and adaptive websites.

30 January 2010

Hot Blocks

Last Friday I had an interview for a senior DBA job and, to my shame, was unable to answer a question about “HOT BLOCKS”. However, the master from Oracle, Tom Kyte, refers to this topic on his Ask Tom website, with a Note reference:

Note ID: 163424.1

Solution

Possible hot blocks in the buffer cache can normally be identified by a high or
rapidly increasing wait count on the CACHE BUFFERS CHAINS latch.

This latch is acquired when searching for data blocks cached in the buffer cache.
Since the buffer cache is implemented as a set of chains of blocks, each of those
chains is protected by a child of this latch when it needs to be scanned. Contention
on this latch can be caused by very heavy access to a single block. This can
require the application to be reviewed.

As stated in Note 42152.1 LATCH: CACHE BUFFERS CHAINS:
to solve a hot block problem, the application may need to be reviewed.

By examining the waits on this latch, information about the segment and the
specific block can be obtained using the following queries.

First determine which latch ids (ADDR) are interesting by examining the number of
sleeps for each latch. The higher the sleep count, the more interesting the
latch id (ADDR) is:
SQL> select CHILD#  "cCHILD"
     ,      ADDR    "sADDR"
     ,      GETS    "sGETS"
     ,      MISSES  "sMISSES"
     ,      SLEEPS  "sSLEEPS" 
     from v$latch_children 
     where name = 'cache buffers chains'
     order by 5, 1, 2, 3;

Run the above query a few times to establish the id (ADDR) that has the most
consistent number of sleeps. Once the id (ADDR) with the highest sleep count is found,
then this latch address can be used to get more details about the blocks
currently in the buffer cache protected by this latch.
The query below should be run just after determining the ADDR with
the highest sleep count.
SQL> column segment_name format a35
     select /*+ RULE */
       e.owner ||'.'|| e.segment_name  segment_name,
       e.extent_id  extent#,
       x.dbablk - e.block_id + 1  block#,
       x.tch,
       l.child#
     from
       sys.v$latch_children  l,
       sys.x$bh  x,
       sys.dba_extents  e
     where
       x.hladdr  = 'ADDR' and
       e.file_id = x.file# and
       x.hladdr = l.addr and
       x.dbablk between e.block_id and e.block_id + e.blocks -1
     order by x.tch desc ;

Example of the output :
SEGMENT_NAME                     EXTENT#      BLOCK#       TCH    CHILD#
-------------------------------- ------------ ------------ ------ ----------
SCOTT.EMP_PK                       5            474          17     7,668
SCOTT.EMP                          1            449           2     7,668

Using the TCH column (the number of times the block is hit by a SQL
statement), you can identify a hot block. The higher the value of the TCH column,
the more frequently the block is accessed by SQL statements.

In order to reduce contention for this object the following mechanisms can be put in place:
1) Examine the application to see if the execution of certain DML and SELECT statements
    can be reorganized to eliminate contention on the object.

2) Decrease the buffer cache - although this may only help in a small number of cases.

3) DBWR throughput may be a factor in this as well.
    If using multiple DBWRs, then increase the number of DBWRs.

4) Increase PCTFREE for the table storage parameters via ALTER TABLE
    or a rebuild. This will result in fewer rows per block.

5) Consider implementing reverse key indexes
   (if range scans aren't commonly used against the segment).
   A sketch of options 4 and 5 follows this list.
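
As a concrete sketch of options 4 and 5, the statements below show what this could look like. The object names (scott.hot_table, scott.hot_table_pk) and the PCTFREE value are made-up examples, not taken from the note:

SQL> -- Spread rows over more blocks; existing rows move only after a rebuild.
SQL> ALTER TABLE scott.hot_table PCTFREE 40;
SQL> ALTER TABLE scott.hot_table MOVE;

SQL> -- Rebuild the index as a reverse key index to scatter sequential key
SQL> -- values over more leaf blocks (avoid if range scans are common).
SQL> ALTER INDEX scott.hot_table_pk REBUILD REVERSE;

Note that ALTER TABLE ... MOVE marks the table's indexes UNUSABLE, so they have to be rebuilt afterwards.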




28 January 2010

Sun + Oracle


What am I missing in the Cloud??

Maybe a business opportunity is being missed here: selling virtual machines to Oracle partners and customers with a pre-specified amount of CPU, RAM and Sun Storage 7000 storage space! You could even offer pre-installed software bundles, or the option for customers to install the software themselves, as some internet service providers are already doing right now.

20 January 2010

ASM init.ora Parameters




Database instance parameters for placing files in ASM disk groups:

DB_CREATE_FILE_DEST
    Allows the specification of an ASM disk group as the default location for
    data files, using a format such as +diskgroup1, where diskgroup1 is an
    existing ASM disk group mounted in the node's ASM instance.

DB_RECOVERY_FILE_DEST
    Allows the specification of an ASM disk group as the recovery files
    location, using a format such as +dgroup4, where dgroup4 is an existing
    ASM disk group mounted in the node's ASM instance.

DB_RECOVERY_FILE_DEST_SIZE
    Specifies the amount of space to reserve in the DB_RECOVERY_FILE_DEST disk
    group for recovery files. An example setting is 5G, where 5 is the integer
    size and "G" tells Oracle this is gigabytes. K and M are also valid, for
    kilobytes and megabytes.

DB_CREATE_ONLINE_LOG_DEST_n
    This parameter (where "n" is replaced with 1-5) specifies the disk group
    where redo logs are created. For example, specifying +dgroup8 places the
    log files in the ASM disk group dgroup8.




ASM instance parameters:

INSTANCE_TYPE
    Must be set to INSTANCE_TYPE = ASM for an ASM instance. Note: this is the
    only required parameter; all other parameters take suitable defaults for
    most environments.

DB_UNIQUE_NAME
    Specifies the unique name for this group of ASM instances within the
    cluster or on a node. It defaults to +ASM and only needs to be modified
    if you are trying to run multiple ASM instances on the same node.

ASM_POWER_LIMIT
    Determines the maximum power used by an ASM instance for disk rebalancing
    operations. The default is 1, which is the least intrusive to database
    operations; a higher setting makes rebalancing faster but more intrusive.

ASM_DISKSTRING
    Limits the set of disks that Automatic Storage Management considers for
    discovery. If not specified, the entire system is searched for compatible
    volumes, which can lengthen the startup time. The default value is NULL,
    which causes ASM to find all of the disks in a platform-specific location
    to which it has read/write access.
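
Putting this together, below is a minimal sketch of how these parameters could appear in the ASM instance's and the database instance's parameter files; the disk string and disk group names are only illustrative examples:

# init+ASM.ora - parameters for the ASM instance itself
INSTANCE_TYPE               = ASM              # the only required parameter
DB_UNIQUE_NAME              = '+ASM'           # default; change only when running several ASM instances on one node
ASM_POWER_LIMIT             = 1                # default; higher values rebalance faster but are more intrusive
ASM_DISKSTRING              = '/dev/raw/raw*'  # example discovery path; NULL searches the platform default location

# init<DB>.ora - parameters in the database instance that point at ASM disk groups
DB_CREATE_FILE_DEST         = '+diskgroup1'
DB_RECOVERY_FILE_DEST       = '+dgroup4'
DB_RECOVERY_FILE_DEST_SIZE  = 5G
DB_CREATE_ONLINE_LOG_DEST_1 = '+dgroup8'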



14 January 2010

Below is a link from the professionals about AJAX and ADF Rich Faces technology.

AJAX (not the soccer team), the technique

One can tell all kinds of interesting stories about Java and Oracle SOA, but isn't it better to show what it can do?

Here are some examples from the Oracle JDeveloper 11g sample webshop application!




13 January 2010

Web 3.0: are we gifted enough?

This is borrowed to give more people the ability to learn about Web 3.0:


Web 3.0 Debates over Definition
Since the origin of the concept of Web 3.0, the debate has gone on about exactly what the term Web 3.0 means and what a suitable definition might be. As new technology emerged, new definitions emerged with it:
Transforming the Web into a database
Transforming the Web into a database is the first step towards a definition of Web 3.0, driven by the emergence of ‘Data Web’ technology: structured data records that can be published to the Web in reusable and remotely queryable formats, such as XML, RDF and microformats. The Data Web is the initial step on the way to the full Semantic Web, enabling a new level of data integration and application interoperability and making data as openly accessible and linkable as Web pages. The Data Web phase is primarily focused on making structured data available using RDF. The full Semantic Web stage will widen the scope so that both structured and semi-structured or unstructured content become widely available in RDF and OWL semantic formats.
An evolutionary path to artificial intelligence
Web 3.0 has also been used to describe a trend towards artificial intelligence on the Web, behaving in a quasi-human fashion. Some cynics believe this is an unobtainable vision. However, new technologies are already being used at mass scale that yield surprising information, such as predicting hit songs by mining information from college music websites. There is also debate about the driving force behind Web 3.0: will it be intelligent systems, or will intelligence emerge in a more organic fashion, from how people interact with it?
The realization of the Semantic Web and Service Oriented Architecture
Another debate concerns the artificial intelligence direction, in which Web 3.0 extends the Semantic Web concept. Academic research is under way to develop reasoning software based on description logic and intelligent agents. These sorts of applications can perform logical reasoning operations using sets of rules that express logical relationships between concepts and data on the Web.
But some critics disagree with the viewpoint that the Semantic Web will be the core of the third generation of the Internet, and suggest other formulas to summarize Web 3.0.
Web 3.0 has also been associated with a possible convergence of SOA (Service Oriented Architecture) and the Semantic Web.
Evolution towards 3D
The evolution of 3D technology is also being connected to Web 3.0, as Web 3.0 may be used on a massive scale due to its characteristics. In this vision the Web would transform into a series of 3D spaces, taking further the concept realized by Second Life. This could open up new ways to connect and collaborate using shared 3D spaces.
Proposed Expanded Definitions of Web 3.0
Nova Spivack has proposed an expanded definition of Web 3.0 as the collection of several complementary technology developments that are simultaneously reaching a new level of maturity, including:
  • Ubiquitous Connectivity, broadband adoption, mobile Internet access and mobile devices
  • Network computing, software-as-a-service business models, Web services interoperability, distributed computing, grid computing and cloud computing
  • Open technologies, Open APIs and protocols, open data formats, open-source software platforms and open data (e.g. Creative Commons, Open Data License)
  • Open identity, OpenID, open reputation, roaming portable identity and personal data
  • The intelligent web, Semantic web technologies such as RDF, OWL, SWRL, SPARQL, Semantic application platforms, and statement-based datastores
  • Distributed databases, the "World Wide Database" (enabled by Semantic Web technologies)
  • Intelligent applications, natural language processing, machine learning, machine reasoning, and autonomous agents

SOA link and talk about Web 3.0

Very useful link about SOA:

http://www.roseindia.net/Technology-revolution/web3.0/web-3-difinition.shtml

Thanks, and I appreciate what they say about Web 3.0.

Very useful link to TechRepublic Consulting Tips.

In order to prevent unnecessary crap on the job, these tips from TechRepublic are useful:

http://blogs.techrepublic.com.com/10things/

Other tip:

Avoid Jehovah's Witnesses!!!

Total Recall with Flashback Archive

The Flashback Data Archive is stored in a tablespace and contains transactional changes to every record in a table for the duration of the record's lifetime. The archived data can be retained for a much longer duration than the retention period offered by an undo tablespace.

Using Flashback Data Archive to access historical data, suppose you want to be able to retrieve the inventory of all items at the beginning of the year from the table inventory, and to retrieve the stock price for each symbol in your portfolio at the close of business on any specified day of the year from the table stock_data.

Create a default Flashback Data Archive named fla1 that uses up to 10 G of tablespace tbs1, whose data are retained for five years (you must be logged on as SYSDBA):

CREATE FLASHBACK ARCHIVE DEFAULT fla1 TABLESPACE tbs1
QUOTA 10G RETENTION 5 YEAR;

Enable Flashback Data Archive for the tables inventory and stock_data, and store the historical data in the default Flashback Data Archive:

ALTER TABLE inventory FLASHBACK ARCHIVE;
ALTER TABLE stock_data FLASHBACK ARCHIVE;

To retrieve the inventory of all items at the beginning of the year 2007, use this query:

SELECT product_number, product_name, count FROM inventory AS OF
TIMESTAMP TO_TIMESTAMP ('2007-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS');

To retrieve the stock price for each symbol in your portfolio at the close of business on July 23, 2007, use this query:

SELECT symbol, stock_price FROM stock_data AS OF
TIMESTAMP TO_TIMESTAMP ('2007-07-23 16:00:00', 'YYYY-MM-DD HH24:MI:SS')
WHERE symbol IN my_portfolio;

Flashback from commandline

Before one can flash back a table or row, one has to determine how far back in time one wants to go. The most obvious choice is one day, because a backup should be made each evening. A parameter called UNDO_RETENTION has to be set; it is specified in seconds, so one day is 86400 seconds. The parameter can be set in the DB Console or in SQL*Plus with the command shown below:


SQL> ALTER SYSTEM SET UNDO_RETENTION = 86400;

For Flashback Database, how far the database can be brought back in time is controlled by the parameter DB_FLASHBACK_RETENTION_TARGET, which is specified in minutes (1440 minutes is one day). To check the current setting, one can run the query below.

SQL> show parameter db_flashback_retention_target

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_flashback_retention_target        integer     1440



SQL> alter system set db_flashback_retention_target=2880 scope=both;
System altered.

SQL> select estimated_flashback_size,flashback_size,oldest_flashback_scn,oldest_flashback_time from v$flashback_database_log;

ESTIMATED_FLASHBACK_SIZE FLASHBACK_SIZE OLDEST_FLASHBACK_SCN OLDEST_F
------------------------ -------------- -------------------- --------
                       0      155648000              6048406 24/07/05


What is the current SCN of the database?

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
6052503



Turning on Flashback:

1) SQL> select flashback_on from v$database;

    FLASHBACK
    ---------
    NO

2) SQL> shutdown immediate;
    Database closed.
    Database dismounted.
    ORACLE instance shut down.

3) SQL> startup mount;
    ORACLE instance started.

    Total System Global Area 209715200 bytes
    Fixed Size 788524 bytes
    Variable Size 162887636 bytes
    Database Buffers 41943040 bytes
    Redo Buffers 4096000 bytes
    Database mounted.


4) SQL> alter database flashback on;
    Database altered.

    SQL> select flashback_on from v$database;

    FLASHBACK
    ---------
    YES

5) SQL> alter database open;
    Database altered.






Querying what the recycle bin contains can be done with the query below:

SQL> SET PAGESIZE 120
SQL> SET LINESIZE 100
SQL> TTITLE 'Current Recycle Bin Contents'
SQL> COL object_name     FORMAT A30         HEADING 'Object Name'
SQL> COL type                   FORMAT A8           HEADING 'Object|Type'
SQL> COL original_name   FORMAT A20         HEADING 'Original Name'
SQL> COL droptime            FORMAT A20         HEADING 'Dropped On'
SQL> COL dropscn             FORMAT 9999999  HEADING 'Drop|SCN'
SQL> SELECT
  2       object_name
  3      ,type
  4      ,original_name
  5      ,droptime
  6    FROM dba_recyclebin
  7   WHERE owner = 'DSSOWNER'
  8  ;

Wed Jan 21                                                                    page    1
                                    Current Recycle Bin Contents

                               Object
Object Name                    Type     Original Name        Dropped On
------------------------------ -------- -------------------- --------------------
BIN$SPDLZLO+TCmczdoVcAFYlw==$0 TABLE    FLASH_BACK_DROPPEDTA 2009-01-21:13:59:32






The general syntax of the FLASHBACK TABLE statement:

FLASHBACK TABLE
   [ schema. ]table
     [, [ schema. ]table ]...
TO { { SCN | TIMESTAMP } expr
        [ { ENABLE | DISABLE } TRIGGERS ]
      | BEFORE DROP [ RENAME TO table ]
      } ;
In order to select from the recycle bin, use the following statements:

  • SELECT * FROM RECYCLEBIN;
  • SELECT * FROM USER_RECYCLEBIN;

Flashback of the table is done with statements of the form:

  • FLASHBACK TABLE table_name TO BEFORE DROP;
  • FLASHBACK TABLE table_name TO BEFORE DROP RENAME TO new_table_name;

If one has multiple dropped versions of a table and wants to restore a specific (for example the oldest) version,
one has to query the contents of the recycle bin to determine the object_name and droptime of the version in question; the table can then be restored with the FLASHBACK TABLE statement, as shown after the example output below.
 
SELECT object_name, droptime FROM user_recyclebin 
   WHERE original_name = 'EMPLOYEES';
 
OBJECT_NAME                    DROPTIME
------------------------------ -------------------
RB$$45703$TABLE$0              2003-06-03:15:26:39
RB$$45704$TABLE$0              2003-06-12:12:27:27
RB$$45705$TABLE$0              2003-07-08:09:28:01
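
To restore a specific older version from output like the above, you can flash back by the recycle bin object name instead of the original name (when you use the original name, Oracle restores the most recently dropped version). A sketch, where the RENAME TO target is just an illustrative name:

SQL> FLASHBACK TABLE "RB$$45703$TABLE$0" TO BEFORE DROP
  RENAME TO employees_old;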




In order to restore a table to a certain point in time, one has to set the table property ENABLE ROW MOVEMENT to YES. This can be done in two ways: with a SQL statement or with the DB Console.

With an SQL statement, as follows:

SQL> ALTER TABLE tablename ENABLE ROW MOVEMENT;

And the flashback of the table to one minute back in time, is done with the command:
 
SQL> FLASHBACK TABLE employees_demo
  TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' minute);

12 January 2010

Flashback Table in time

Rewinding a Table Using Oracle Flashback Table

Oracle Flashback Table enables you to rewind one or more tables back to their contents at a previous time without affecting other database objects. Thus, you can recover from logical data corruptions such as table rows added or deleted accidentally. Unlike point-in-time recovery, the database remains available during the flashback operation.
For this example, you use Flashback Table on the employees table in the hr schema. Assume that an erroneous update shortly after October 23, 2005 at 15:30:00 has changed the lastname column for all employees to an empty string, and you must return the original lastname values to the table.

Enabling Row Movement on a Table

Before you can use Flashback Table, you must ensure that row movement is enabled on the table to be flashed back, or returned to a previous state. Row movement indicates that rowids will change after the flashback occurs. This restriction exists because if rowids before the flashback were stored by an application, then there is no guarantee that the rowids will correspond to the same rows after the flashback.
To enable row movement on a table:
1.        On the Database Home page, click Schema to display the Schema subpage.
2.        Click Tables in the Database Objects section.
The Tables page appears.
3.        To find the target table for Flashback Table, do the following:
1.        Enter the schema name in the Schema field and, optionally, the table name in the Object Name field.
2.        Click Go to search for the table.
When you search for tables, for example, in the hr schema, you may have to page through the search results to find your table.
4.        Select the table from the list of tables and click Edit.
In this scenario, select employees.
The Edit Table: table_name page appears.
5.        Click Options to go to the Options subpage.
6.        Complete the following steps:
1.        Set Enable Row Movement to Yes.
2.        Click Apply to update the options for the table.
An update message appears.
7.        Complete the following steps:
1.        Click Tables at the top of the page to return to the search results.
2.        Enable row movement on more tables by repeating Step 1 through Step 6 for each table.
For this example, you should also enable row movement on the tables hr.jobs and hr.departments.
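
If you prefer SQL*Plus over the DB Console, the same row movement setting can be applied with ALTER TABLE statements; a sketch for the three tables in this scenario:

SQL> ALTER TABLE hr.employees ENABLE ROW MOVEMENT;
SQL> ALTER TABLE hr.jobs ENABLE ROW MOVEMENT;
SQL> ALTER TABLE hr.departments ENABLE ROW MOVEMENT;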

Performing a Flashback Table Operation

In this example, you rewind the hr.employees table and its dependent tables to a previous point in time.
To perform the Flashback Table operation:
1.        On the Database Home page, click Availability to display the Availability subpage.
2.        Select Perform Recovery from the Manage section.
The Perform Recovery page appears.
3.        In the User Directed Recovery section, select Tables from the Recovery Scope list.
The page reloads with options appropriate for object-level recovery of tables.
4.        For Operation Type, choose Flashback Existing Tables, and click Recover.
The Perform Object Level Recovery: Point-in-time page appears.
5.        Choose the target time for the Flashback Table operation, and click Next.
Note:
If you do not know the time at which the unwanted changes occurred, then you can investigate the history of transactions affecting this table by selecting Evaluate row changes and transactions to decide upon a point in time. Oracle Flashback Version Query enables you to review all recent changes to the target table. Use of this feature is beyond the scope of this documentation.
For this example, assume that rows were accidentally inserted 5 minutes ago. Select Flashback to a timestamp and enter a time 5 minutes before now.
The Perform Object Level Recovery: Flashback Tables page appears.
6.        Enter table names in the Tables to Flashback text box and then click Next.
You can enter multiple table names to flash back several tables to the same time. You can also click Add Tables and search for more tables. For this example, enter the text hr.employees in the Tables to Flashback text box.
If your table has other dependent tables, then the Dependency Options page appears. This page asks how dependencies should be handled.
7.        Do one of the following, and then click Next:
•        Select Cascade to flash back any dependent tables.
•        Select Restrict to flash back only the target table.
•        Select Customize to choose which dependent tables to flash back and which to leave as they are.
You can click Show Dependencies to see which tables will be affected.
Note:
Row movement must be enabled on all affected tables, not just the initial target tables.
In this example, the hr.employees table has dependent tables hr.jobs and hr.departments, so select Cascade and then click Next.
The Perform Object Level Recovery: Review page appears.
8.        Click Submit.
When the operation is completed, a Confirmation page displays the results. Click OK.
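
The command-line equivalent of this scenario uses the FLASHBACK TABLE statement directly; a sketch that rewinds the three tables by five minutes, assuming row movement has been enabled on all of them:

SQL> FLASHBACK TABLE hr.employees, hr.jobs, hr.departments
  TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '5' MINUTE);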

How to use RMAN successfully on the command line

First create a user in the catalog database and give it the necessary grants:

CREATE USER rman IDENTIFIED BY rman
TEMPORARY TABLESPACE temp
DEFAULT TABLESPACE rman
QUOTA UNLIMITED ON rman;

grant dba,connect,resource, sysdba to rman;

grant RECOVERY_CATALOG_OWNER to rman;


Then logon to RMAN:


Microsoft Windows [Version 5.2.3790]
(C) Copyright 1985-2003 Microsoft Corp.

H:\>E:

E:\oracle\ora92\bin>rman catalog=rman/rman@rmandb

Recovery Manager: Release 9.2.0.1.0 - Production

Copyright (c) 1995, 2002, Oracle Corporation.  All rights reserved.

connected to recovery catalog database



Then create a catalog:

RMAN> create catalog tablespace "RMAN";

recovery catalog created

RMAN> exit


Recovery Manager complete.

E:\oracle\ora92\bin>


Then register the database:

E:\oracle\ora92\bin>rman catalog=rman/rman@rmandb target=sys/visor@boprd

RMAN> register database;

RMAN> configure retention policy to recovery window of 7 days;
RMAN> configure default device type to disk;
RMAN> configure controlfile autobackup on;
RMAN> configure channel device type disk format 'D:\Backup\Backup_%d_%T_%U';
RMAN> configure controlfile autobackup format for device type disk to 'D:\Backup\B_%d_%F';


NOTE: 'D:\Backup\' is the backup location where the backup pieces and the AUTOBACKUP of the control file or spfile are created, and "%d_%T_%U" and "%d_%F" are the format strings.


'Format_String' clause

The formatting of this information varies by platform.

Syntax:

  %c  The copy number of the backup piece within a set of duplexed
      backup pieces. If you did not duplex a backup, then this variable
      is 1 for backup sets and 0 for proxy copies.
      If the backup is duplexed, then the variable shows the copy
      number. The maximum value for %c is 256.

  %d  The name of the database.

  %D  The current day of the month (in format DD)

  %F  Combination of DBID, day, month, year, and sequence into a unique
      and repeatable generated name.

  %M  The month (format MM)

  %n  The name of the database, padded on the right with x characters
      to a total length of eight characters. (AKA: porn star alias name)
      For example, if scott is the database name, then %n = scottxxx.

  %p  The piece number within the backup set. This value starts at 1
      for each backup set and is incremented by 1 as each backup piece
      is created. Note: If you specify PROXY, then the %p variable must
      be included in the FORMAT string either explicitly or implicitly within %U.

  %s  The backup set number. This number is a counter in the control file that
      is incremented for each backup set. The counter value starts at 1 and is
      unique for the lifetime of the control file. If you restore a backup
      control file, then duplicate values can result.
      Also, CREATE CONTROLFILE initializes the counter back to 1.

  %t  The backup set time stamp, which is a 4-byte value derived as the
      number of seconds elapsed since a fixed reference time.
      The combination of %s and %t can be used to form a unique name for
      the backup set.

  %T  The year, month, and day (YYYYMMDD)

  %u  An 8-character name constituted by compressed representations of
      the backup set number and the time the backup set was created.

  %U  A convenient shorthand for %u_%p_%c that guarantees uniqueness in
      generated backup filenames.
      If you do not specify a format, RMAN uses %U by default.

  %Y  The year (YYYY)

  %%  Specifies the '%' character. e.g. %%Y translates to %Y.
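
As a small worked example of these variables, the channel format configured earlier can be read as follows (an illustration, not output from an actual run):

RMAN> configure channel device type disk format 'D:\Backup\Backup_%d_%T_%U';
# %d : the database name
# %T : the year, month and day (YYYYMMDD)
# %U : shorthand for %u_%p_%c, which guarantees a unique name per backup piece
# Each backup piece therefore lands in D:\Backup\ with a name like
# Backup_<dbname>_<yyyymmdd>_<unique piece id>.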


"We don't want to go back to tomorrow, we want to go forward" - Dan Quale


Then create the backup with the following command:

run {
  backup database plus archivelog;
  delete noprompt obsolete;
    }



The snapshot control file configuration below ran into a problem:
RMAN> CONFIGURE SNAPSHOT CONTROLFILE NAME TO 'D:\ORACLE\ora92\database\SNCFBOPRD.ORA';

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of configure command at 08/05/2008 12:37:18
RMAN-03014: implicit resync of recovery catalog failed
RMAN-03009: failure of full resync command on default channel at 08/05/2008 12:37:18
ORA-01580: error creating control backup file E:\ORACLE\ORA92\SNCFBOPRD.ORA
ORA-27040: skgfrcre: create error, unable to create file
OSD-04002: unable to open file
O/S-Error: (OS 3) The system cannot find the path specified.
The solution for this problem: log on to the target database in SQL*Plus as sys as sysdba, and then execute the command shown below with a snapshot control file path that exists on the target server:


SQL> EXECUTE SYS.DBMS_BACKUP_RESTORE.CFILESETSNAPSHOTNAME('D:\oracle\ora92\database\SNCFBOPRD.ORA');



Command for backup:


run {
allocate channel ch1 type disk format 'E:\backup\full_backup_%d_%s_%t';
backup full tag FULL_DB
( database include current controlfile)
( archivelog from time 'sysdate-1' all delete input);
release channel ch1;
}


Command for restore:

run {
shutdown immediate;
startup mount;
set until time '2004-07-14 10:00:00';
restore database;
recover database;
alter database open resetlogs;
}




Oracle RMAN script to make a cold backup of a 9i database
-------------------------------------------------------

# 9i.rcv
# This is an AUTOMATICALLY GENERATED RMAN script for Oracle 9 to
# cold backup the 9i database including controlfile, datafiles and
# archive redo logs to disk (no Legato tape option is available yet).
# No recovery catalog is currently used, due to current semaphore
# limitations on some machines.

shutdown immediate;
startup mount;
run {

allocate channel c1 type disk;
backup
        tag Full_9i_daily
        format '/oracle/9i/full/9i/Full_%d_%s_%p_%t'
        (database);
release channel c1;

configure controlfile autobackup format for device type disk to '/oracle/9i/full/9i/Auto_Ctrl_%F';
configure controlfile autobackup on;

allocate channel c1 type disk;
backup
        format '/oracle/9i/full/9i/Archive_%d_%s_%p_%t'
        (archivelog all );
release channel c1;

configure controlfile autobackup off;

}
startup;


alternative script for cold backup with rman:


replace script 'rman_daily_backup' {
shutdown immediate;
startup mount;
allocate channel dev1 type disk format '/images/rman_%s.bak';
backup database include current controlfile;
alter database open;
release channel dev1;
}
   

03 January 2010

I received an invitation to renew and upgrade my Oracle Partner level to GOLD!

On December 14th I received the email to renew and upgrade my Oracle Partner level to GOLD! This is an honor and an achievement of which I am proud. It means that I will have access to more resources and will be able to position myself better in the market. It also means that my plan to become an Oracle Certified Master will have to speed up, so I can sell my skill set even better. For this, my plan is to buy a computer that will be dedicated to Linux and Oracle VM. That means I will have to be able to work with Dom0 and DomU. Ain't that a thing!

I look forward to a great Oracle Partner business practice for the year 2010, and I wish everyone at Oracle and of course all other IT people a happy New Year for 2010 and very good business results.





Albert