Friday, November 17, 2006

Tip : Oracle Home

When you have several ORACLE_HOMEs, here is a way to find which one belongs to a particular instance:

1) get the pid of a background process:
ps -edf | grep [o]ra_smon_ORAPRD01
oracle 307694 1 0 Jan 24 - 1:43 ora_smon_ORAPRD01

2) get its current working directory from /proc:
ls -l /proc/307694/cwd
lr-x------ 2 oracle dba 0 Feb 28 16:00 /proc/307694/cwd -> /app/oracle/product/9.2.0/dbs/

Here the ORACLE_HOME is /app/oracle/product/9.2.0.
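The same lookup can be scripted. A minimal Python sketch of the idea, assuming a Linux-style /proc filesystem (the function name is my own):

```python
import os

def oracle_home_from_pid(pid):
    """Derive ORACLE_HOME from a background process's working directory.

    Oracle background processes run with their cwd set to $ORACLE_HOME/dbs,
    so stripping a trailing /dbs from /proc/<pid>/cwd yields the home.
    """
    cwd = os.readlink("/proc/%d/cwd" % pid).rstrip("/")
    if cwd.endswith("/dbs"):
        return cwd[:-len("/dbs")]
    return cwd

# Demonstration on the current process (for a non-Oracle process this
# simply prints its own working directory):
print(oracle_home_from_pid(os.getpid()))
```

In practice you would feed it the PID found with the ps/grep step above.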

Tip : dbms_java

Execute the following two statements in your session:

SET SERVEROUTPUT ON SIZE 1000000;
CALL DBMS_JAVA.SET_OUTPUT (1000000);

All the output of System.out.println calls in your stored procedures will then be shown in your serveroutput. This gives you a simple and fast way to debug your procedure like any other PL/SQL procedure, just as you would use the dbms_output.put_line procedure.

Saturday, May 13, 2006

Tip : 10g Problem

When you start to work with 10g, you will notice some "new" behavior that you didn't expect.
After installation on a Unix environment you aren't able to run sqlplus or tnsping from that server unless you are the oracle user or belong to the DBA group.

It looks like Oracle was not able to get the security rights quite right.
Their solution is to close it for everybody (as in the big, bad world) and you're safe.
Now you, as a DBA, have to open it for the normal (non-DBA) users, and now YOU are responsible!
That's an easy solution from Mr. Oracle.

You need to run $ORACLE_HOME/install/changePerm.sh to lift the restrictions.
Note that some directories are kept too restrictive; you need to execute the following in the ORACLE_HOME:
$ chmod o+rx lib lib2 ldap ldap/admin srvm/lib32

(Or better: check out MetaLink note 363721.1 and download patch 5087548.)

Another effect is that the CONNECT role has changed (it is more restrictive in 10g).
To restore the 10gR2 CONNECT role to what it was in 9i:

SQL> @$ORACLE_HOME/rdbms/admin/rstrconn.sql


Monday, February 27, 2006

Tip : Easy Connect String

EZCONNECT - Easy Connect String
You can avoid having to use a TNSNAMES.ORA file by specifying the full connect string at the command prompt. The hostname is the only mandatory piece.

sqlplus username/password@[//]hostname[:port][/service_name]

E.g. sqlplus scott/tiger@10gserver:1521/ORCL

Set up EZCONNECT in the sqlnet.ora file as follows:

NAMES.DIRECTORY_PATH = (TNSNAMES, EZCONNECT)

This gives us an easy way of connecting to databases from a client machine without keeping the DB information in TNSNAMES.ORA.

This information is taken from http://www.dba-village.com/
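The connect descriptor is just string assembly, so its defaults are easy to see in code. A small illustrative Python helper (the function name is my own sketch, not an Oracle API):

```python
def ez_connect(host, port=None, service=None):
    """Assemble an EZCONNECT descriptor: //host[:port][/service_name].

    Only the hostname is mandatory; when the port is omitted the client
    falls back to the default listener port, 1521.
    """
    descriptor = "//" + host
    if port is not None:
        descriptor += ":%d" % port
    if service is not None:
        descriptor += "/" + service
    return descriptor

print(ez_connect("10gserver", 1521, "ORCL"))  # //10gserver:1521/ORCL
print(ez_connect("10gserver"))                # //10gserver
```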

Tuesday, February 21, 2006

Oracle Firefox Search Plugins

Mozilla Firefox

General Oracle Search Plugins

ORACLE-BASE.com Search Plugin (oracle-base-plugin) - Searches the ORACLE-BASE.com site.
OTN Search Plugin (otn-plugin) - Searches the Oracle Technology Network site.
Oracle Metalink Search Plugin (oracle-metalink-plugin) - Searches the Oracle Metalink site.

Oracle Error Search Plugins

Oracle 8.1.7 Errors Search Plugin (oracle817-errors-plugin) - Searches the Oracle 8.1.7 error messages manual.
Oracle 9.0.1 Errors Search Plugin (oracle901-errors-plugin) - Searches the Oracle 9.0.1 error messages manual.
Oracle 9.2.0 Errors Search Plugin (oracle920-errors-plugin) - Searches the Oracle 9.2.0 error messages manual.
Oracle 10.1.0 Errors Search Plugin (oracle101-errors-plugin) - Searches the Oracle 10.1.0 error messages manual.
Oracle 10.2.0 Errors Search Plugin (oracle102-errors-plugin) - Searches the Oracle 10.2.0 error messages manual.

Oracle Documentation Search Plugins

Oracle 8.1.7 Documentation Search Plugin (oracle817-docs-plugin) - Searches the Oracle 8.1.7 manuals.
Oracle 9.0.1 Documentation Search Plugin (oracle901-docs-plugin) - Searches the Oracle 9.0.1 manuals.
Oracle 9.2.0 Documentation Search Plugin (oracle920-docs-plugin) - Searches the Oracle 9.2.0 manuals.
Oracle 10.1.0 Documentation Search Plugin (oracle101-docs-plugin) - Searches the Oracle 10.1.0 manuals.
Oracle 10.2.0 Documentation Search Plugin (oracle102-docs-plugin) - Searches the Oracle 10.2.0 manuals.

This information is taken from http://www.oracle-base.com

Tuesday, February 14, 2006

Standalone Reports Server on 10g R2 Developer

If you try to install the Reports Server as an NT service in 10gAS R2 (10.1.2.0.2) by running the following command at the command prompt:

rwserver -install autostart=yes

you will get an error message saying "Please consult the installation guides for how to setup and run this program".
Beginning with Oracle Reports 10g Release 2 (10.1.2), running Reports Server as a Windows service is no longer supported (rwserver -install server_name). As a result, the related command line keywords INSTALL and UNINSTALL are also obsolete.

A Reports Server registered with Oracle Enterprise Manager 10g should be started or stopped only through Oracle Enterprise Manager 10g/OPMN. OPMN automatically restarts Reports Server if it stops responding for some reason. On Windows, OPMN itself runs as a Windows service. Start Reports Server as a standalone server on Windows using the following command:

rwserver server=server_name

Add the BATCH command line keyword to start up the server without displaying dialog boxes or messages.

rwserver server=server_name batch=yes

Monday, February 13, 2006

J2EE vs .NET: Where Is Application Development Going?

By Duncan Mills - outlines the rise and rise of the meta-framework.

Where is application development going? What's the next cool thing? You may have answers to these questions; your answers may be the same as or different from mine or anyone else's. The point is we just don't really know, and that's a problem. Saying to the managers of enterprise development shops "Oh yes, just standardize on J2EE and everything will be fine" is not going to cut it. These folks are savvy enough to know that J2EE is a minefield of choice in standards and APIs. They need and deserve more direction than that.

So you can make a suggestion as to a good set of technologies to use in a particular scenario - let's say Toplink, Lucene, Struts and JSP, as a random example - but of course there's a catch. You've just flagged a whole bunch of different technologies and APIs, each with a learning curve, each with different download locations or vendors, and possibly conflicts in the APIs they consume.

This is, I think, why .NET presses a lot of the right buttons. It's a meta-framework - a one-stop shop. Say to a development manager "you just have this one thing to do everything you need", and of course it's going to be attractive, irrespective of what the reality might be under the covers. There is no doubt that there are a lot of fantastic point solutions and frameworks out there in the J2EE world, but as standalone islands of functionality they have a much harder sell in the corporate market.

If we look for frameworks that have been successful and widely adopted, and examine them to see what gives them the edge, what will we find? Take Struts for instance (love it or hate it). I'd be hard pressed to call Struts a meta-framework by my current thinking, but for its time, it was. It wasn't just a collection of taglibs - that's what everyone was doing - it was taglibs plus a front controller, and it evolved to encompass validation as well. Struts became a worthwhile skill because with that one notch in your belt you could tackle a good chunk of an application's development.

What Defines a Meta-Framework today?

Broad Scope - the framework needs to cover everything from UI creation and page flow controller functionality to integration with multiple service providers including EJB, Web services, POJO and so on. It's not just a vertical slice through the stack.

Pluggability - the flexibility within the meta-framework to incorporate choice into the stack. There is no reason at all that a meta-framework cannot encapsulate existing best-of-breed service providers; this is particularly true in an area like O/R mapping, where I might want to use EJB, I might want to use a POJO-based Toplink solution, or something totally new might come along. Just give me the choice (but feel free to offer best-practice solutions).

Coexistence - Given that it's unlikely that a meta-framework will be able to implement everything itself, it's going to be in turn a consumer of service frameworks - this is implicit in the pluggability argument. This in turn implies that the coupling between services within the framework has to be loose, otherwise the pluggability dream cannot be fulfilled. It also, however, imposes a degree of responsibility on the provider of the meta-framework.

Someone has to test all this stuff together: are there classloading problems, for instance; do all these components share the same version of key APIs; and so on. If you construct your own bespoke architecture, this is something you'd have to worry about. If you consume a meta-framework, one of the things you should be getting is this certification. Of course that might mean that components within a meta-framework are not absolutely cutting edge within a specific genre, but do developers want cutting edge, or do they want assured working?


Abstraction - where you have choice you need abstraction. If I want to swap out my O/R mapping layer I don't want to have to adopt a whole new set of APIs at the binding or transactional glue points. The meta-framework needs to add value here; standards such as the common databinding proposed by JSR 227 are ideally placed to provide this type of plumbing.

I'm not dumb enough to suppose that swapping out is actually common within an individual application, but the point is that the same skillset can be reused between projects, or where a project has a heterogeneous set of providers. Abstraction generally is where meta-frameworks can add the most value, because it leads into the next point - longevity.

Longevity - APIs change; this is a fact of life. Frameworks provide a level of abstraction on top of the base platform APIs and, as a result, a degree of insulation from that change. But frameworks change too with time, particularly active community-driven ones. Meta-frameworks can add another layer of abstraction and programmer insulation on top of this shifting morass. You code to the meta-framework and the plumbing is handled for you; as the sands shift, the meta-framework adapts on your behalf. Can this work in reality? Well yes, certainly in the world of proprietary frameworks we have environments like Oracle Forms, which have persisted and evolved for almost 20 years, bringing code forward from VT terminals, through Windows GUIs, to the Web, essentially unchanged, although perhaps enhanced as more capabilities appeared.

This then is the major carrot that meta-frameworks can offer to enterprise development shops - stability, but without stagnation. A meta-framework has to evolve and offer its consumers the latest and greatest, whilst at the same time maintaining a rock-solid foundation for existing applications.


Tooling - Meta-frameworks will often be based around both coded abstraction layers and a large amount of in-metadata configuration. As such, tools support is an important part of the picture. Tooling can add yet another layer of abstraction through alternative visualizations such as diagramming or GUI screen designers. This helps with the whole future-proofing issue.

But Why Now?

Why should you believe that meta-frameworks have any traction? We've not really seen any SOA-like marketing buzz around such frameworks, no great vendor splashes. Well, I think now is the time because it's happening in a stealthy and underhand way in any case. If we ignore the vendors for a second and just look at the standards and trends: What's the big trend at the moment? - well POJOs and IOC, think EJB 3.0, think Spring, you name it - loose coupling in other words.

I also think that the JavaServer Faces (JSF) standard is a key player here. JSF offers an abstracted UI definition and event handling model that can be run across multiple devices. That's a large chunk of meta-framework right there. If I learn JSF I can code for mainstream browsers, handhelds and even industrial telnet devices with a single skill set. That works for me!

Where Are The Meta-Frameworks?

We've seen that Microsoft can do it with .NET, but they have the luxury of almost total control. Are fully fledged meta-frameworks possible in the open-standards J2EE space?

Well, yes, I think it can be done. Many frameworks aspire to it, but most of those that do lack the scope of a true meta-framework. Maybe they only support one UI technology, or only allow EJBs for persistence. That's not good enough; a framework must be adaptable and willing to evolve to drink the soup du jour, but of course do that in a balanced and supportable way. To date I'd say that, to varying degrees, the Oracle ADF framework and the Spring framework come closest to exhibiting the essential meta-framework attributes. Keel is also out there in this space, but it is lacking traction and is unlikely to be that attractive to large enterprises.

Meta-frameworks, then, have the potential to offer exactly what the large enterprise shops need: a certified technology stack with the flexibility to meet the majority of requirements, and the promise of a lifetime that matches the application being built with it. I think it's inevitable that meta-frameworks are in the domain of the commercial vendors (and I include in this grouping the vendors operating on a service basis for open source, as well as the paid-for-product vendors). Maintaining a meta-framework is a long-term and expensive commitment; it's going to have to be paid for, either through license costs on the framework itself, or through support/service costs. It's also got to be backed by companies that stand a chance of being around for the required timescales.
The vendors though, are out there and ready to jump into this space. The meta-frameworks are coming...

Sunday, February 12, 2006

Close all forms with one button

When you have a multiform application with several open form modules, they are organized within one MDI parent Runform window. It is standard behavior on Win32 that when the user presses the 'Close' button on an MDI parent window, all open child windows start to close. However, pressing the 'Close' button on the MDI parent Runform window causes only the current form to be closed. To achieve functionality similar to the standard one in an MDI Runform session, we need the following code in every involved form module:

WHEN-NEW-FORM-INSTANCE trigger:

default_value('false','global.closing');


WHEN-WINDOW-ACTIVATED trigger:

if :global.closing='true' then

exit_form;

end if;

Now, it is up to the Forms developer where he/she triggers this closing process from. A special toolbar button or menu item can be used for this purpose with the code:

:global.closing := 'true';

exit_form;

and all open forms within the current Forms MDI Runform parent window start to close.


Friday, February 10, 2006

Using the COPY and NAME_IN built-ins

Using the COPY and NAME_IN built-in functions in Forms isn't explained that well in the documentation. But they can be very useful in making your forms more generic, by enabling you to build up field names dynamically and subsequently set and/or get the field values. In addition, with the COPY function it's possible to programmatically insert non-numeric characters such as '%' into numeric fields, thus allowing wildcard searches to be performed.


NAME_IN

The name_in function allows you to get the value of a variable whose name is itself held in another variable. Consider a form with two text fields on it - field1 and field2. Now into field1 enter the string 'Hello' and into field2 enter the string 'field1'. If you now do a message(name_in(:field2)) it will display the string 'Hello'. As another example, suppose you want to know the value contained in the current form item.
You look at the help and see that there is a system variable called current_item. Great, I'll just message this out - however, you'll soon discover that system.current_item is the name of the current item, not what it contains. To get at the value of the current item just enclose it within the name_in built-in: name_in(:system.current_item)


COPY


COPY is the complement of the NAME_IN function in that it allows you to set the value of a variable whose name is itself held in another variable. For example, suppose you wish to set the value of the current item to the string 'Hello' - assuming the current item is a text field. You can't simply do

:system.current_item:='Hello' ;

as Forms disallows this. But you can do a

COPY('Hello',:system.current_item);

Often you may have to dynamically build the names of fields on your form and set them to some value. Say you have 5 blocks - block1, block2, etc. - all containing a text field of the same name - stock_id. To set the value of the stock_id field that the cursor is currently on, you might use code like:

COPY('IBM',:system.current_block||'.stock_id');

One other use that COPY has is to place non-numeric characters into numeric fields programmatically during enter-query mode. Why would you want to do that? Mostly to allow the placing of wildcard characters such as '%'. You'll find that if you simply try:

:num_field := '123%';

Forms will issue an error message.

However doing COPY('123%',:num_field) works OK.

This only works during enter-query mode, and you only need it if you have to enter the characters programmatically. You can just type in such characters normally if required.
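The indirection that NAME_IN and COPY provide can be pictured outside Forms as a read and a write through a name held in another variable. A rough Python analogy (this is not Forms code; item names simply become dictionary keys):

```python
# A toy "form": item names mapped to their current values.
items = {"field1": "Hello", "field2": "field1"}

def name_in(items, name):
    """Read the value of the item whose name is stored in item `name`, like NAME_IN."""
    return items[items[name]]

def copy(items, value, name):
    """Write `value` into the item named by the string `name`, like COPY."""
    items[name] = value

print(name_in(items, "field2"))  # field2 holds 'field1', so this prints Hello
copy(items, "IBM", "block1.stock_id")
print(items["block1.stock_id"])  # IBM
```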

Oracle Technical Interview Questions Answered

1. Explain the difference between a hot backup and a cold backup and the benefits associated with each.

A hot backup is basically taking a backup of the database while it is still up and running and it must be in archive log mode. A cold backup is taking a backup of the database while it is shut down and does not require being in archive log mode. The benefit of taking a hot backup is that the database is still available for use while the backup is occurring and you can recover the database to any point in time. The benefit of taking a cold backup is that it is typically easier to administer the backup and recovery process. In addition, since you are taking cold backups the database does not require being in archive log mode and thus there will be a slight performance gain as the database is not cutting archive logs to disk.

2. You have just had to restore from backup and do not have any control files. How would you go about bringing up this database?

I would create a text-based backup control file, stipulating where on disk all the data files were, and then issue the recover command with the using backup controlfile clause.

3. How do you switch from an init.ora file to a spfile?

Issue the create spfile from pfile command.

4. Explain the difference between a data block, an extent and a segment.

A data block is the smallest unit of logical storage for a database object. As objects grow they take chunks of additional storage that are composed of contiguous data blocks. These groupings of contiguous data blocks are called extents. All the extents that an object takes when grouped together are considered the segment of the database object.
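To put numbers on that hierarchy, here is a tiny illustration (the block size and extent sizes are invented):

```python
block_size = 8192          # bytes per data block (db_block_size)
extents = [8, 8, 16, 32]   # blocks in each extent allocated so far

# An extent is a run of contiguous blocks; the segment is all its extents.
segment_blocks = sum(extents)
segment_bytes = segment_blocks * block_size
print(segment_blocks, segment_bytes)  # 64 524288
```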

5. Give two examples of how you might determine the structure of the table DEPT.

Use the describe command or use the dbms_metadata.get_ddl package.

6. Where would you look for errors from the database engine?

In the alert log.

7. Compare and contrast TRUNCATE and DELETE for a table.

Both the truncate and delete commands have the desired outcome of getting rid of all the rows in a table. The difference between the two is that the truncate command is a DDL operation and just moves the high water mark, producing no rollback. The delete command, on the other hand, is a DML operation, which will generate rollback and thus take longer to complete.

8. Give the reasoning behind using an index.

Faster access to data blocks in a table.

9. Give the two types of tables involved in producing a star schema and the type of data they hold.

Fact tables and dimension tables. A fact table contains measurements while dimension tables will contain data that will help describe the fact tables.

10. What type of index should you use on a fact table?

A Bitmap index.

11. Give two examples of referential integrity constraints.

A primary key and a foreign key.

12. A table is classified as a parent table and you want to drop and re-create it. How would you do this without affecting the children tables?

Disable the foreign key constraint to the parent, drop the table, re-create the table, enable the foreign key constraint.

13. Explain the difference between ARCHIVELOG mode and NOARCHIVELOG mode and the benefits and disadvantages to each.

ARCHIVELOG mode is a mode that you can put the database in for creating a backup of all transactions that have occurred in the database so that you can recover to any point in time. NOARCHIVELOG mode is basically the absence of ARCHIVELOG mode and has the disadvantage of not being able to recover to any point in time. NOARCHIVELOG mode does have the advantage of not having to write transactions to an archive log and thus increases the performance of the database slightly.

14. What command would you use to create a backup control file?

Alter database backup control file to trace.

15. Give the stages of instance startup to a usable state where normal users may access it.

STARTUP NOMOUNT - Instance startup

STARTUP MOUNT - The database is mounted

STARTUP OPEN - The database is opened

16. What column differentiates the V$ views to the GV$ views and how?

The INST_ID column which indicates the instance in a RAC environment the information came from.

17. How would you go about generating an EXPLAIN plan?

Create a plan table with utlxplan.sql.

Use the explain plan set statement_id = 'tst1' into plan_table for a SQL statement

Look at the explain plan with utlxplp.sql or utlxpls.sql

18. How would you go about increasing the buffer cache hit ratio?

Use the buffer cache advisory over a given workload and then query the v$db_cache_advice table. If a change was necessary then I would use the alter system set db_cache_size command.

19. Explain an ORA-01555

You get this error when you get a snapshot too old within rollback. It can usually be solved by increasing the undo retention or increasing the size of rollbacks. You should also look at the logic involved in the application getting the error message.

20. Explain the difference between $ORACLE_HOME and $ORACLE_BASE.

ORACLE_BASE is the root directory for oracle. ORACLE_HOME located beneath ORACLE_BASE is where the oracle products reside.

Well, we have gone through the first 20 questions as I would answer them during an interview. Please feel free to add your personal experiences to the answers, as it will always improve the process and add your particular touch. As always, remember these are "core" DBA questions and not necessarily related to the Oracle options that you may encounter in some interviews. Take a close look at the requirements for any job and try to come up with questions that the interviewer may ask. Next time we will tackle the rest of the questions. Until then, good luck with the process.


Solaris Logadm

Solaris 9 has added a new command called logadm which is useful in managing Oracle log files as well as any other log files that are used on your system. I don't know whether other flavours of UNIX have similar commands or not.

The command is '/usr/sbin/logadm'. The default configuration file is '/etc/logadm.conf', but a user configuration file can be specified.

The way I have used this is to create a cron job as below. This uses an Oracle specific config file.
====================
0 01 * * * "/usr/sbin/logadm -f /opt/apps/oracle/utils/logadm/logadm.conf"
====================

Then I created the config file as below. The configuration file must be left writable, as logadm updates the entries with the -P option to keep track of the last-processed timestamps.
====================
#
/oracle/admin/GANDALF/bdump/alert_GANDALF1.log -C 3 -c -p 1w -z 1
/oracle/product/10.1.0/db/network/log/listener.log -C 3 -c -p 1w -z 1
/oracle/product/10.1.0/db/network/log/sqlnet.log -C 3 -c -p 1w -z 1
====================
Using the switch '-c' means that 'logadm' is able to process the listener.log file without problems due to locking.

For further details consult the 'logadm' man page.
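As a rough picture of what the '-c' (copy-and-truncate) scheme with '-C 3' does, here is a simplified Python sketch. Unlike logadm it compresses every rotated copy; it is only an illustration of the scheme, not a replacement for the real tool:

```python
import gzip
import os
import shutil

def rotate_copy_truncate(path, keep=3):
    """Copy-truncate rotation: log.1.gz <- log.0.gz <- log, keeping `keep` copies.

    Copying and truncating in place (logadm's -c) lets a process that holds
    the file open, like the TNS listener, keep writing to the same inode,
    instead of writing on into a renamed file.
    """
    # Shift the older compressed copies up by one, dropping the oldest.
    for i in range(keep - 1, 0, -1):
        src = "%s.%d.gz" % (path, i - 1)
        dst = "%s.%d.gz" % (path, i)
        if os.path.exists(src):
            os.replace(src, dst)
    # Compress the current contents into .0.gz, then truncate the live file.
    with open(path, "rb") as f, gzip.open(path + ".0.gz", "wb") as g:
        shutil.copyfileobj(f, g)
    open(path, "w").close()
```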

Thursday, February 09, 2006

Rollback Segments

Each time Oracle makes a change to schema data it records the information required to undo that change in a special type of database area called a rollback segment. This information is always kept at least until the transaction making the change has committed, but as soon as the transaction is complete its rollback or undo data can be overwritten. How soon this happens depends on how much undo space is available and how quickly current and future transactions create new undo records. Within a few seconds, or minutes, or hours the undo information will be overwritten or, in some cases, simply discarded. Since the introduction of Oracle Version 6 in 1988 the allocation of rollback segment space has been a major concern for Oracle DBAs, who have had to decide both how many rollback segments an instance should have and how large each one should be. Resolving this issue has typically required a number of compromises that are outside the scope of this post.

Oracle9i supports the traditional rollback segment management features that have evolved over the past 13 years, but also introduces Automatic Undo Management. In this mode the DBA only has to create an "undo tablespace", tell Oracle to use this tablespace, and specify for how many seconds each undo record must be retained. The records will, of course, be kept for longer if the transaction that creates them does not commit within the time interval. In Oracle9i the following three instance parameters will guarantee that all undo entries will remain available for 15 minutes:

undo_management = AUTO
undo_retention = 900 # seconds
undo_tablespace = UNDOTBS

However, a potentially unwanted side effect is that the Oracle server will not retain the data for much longer than the time specified, even if the instance is running with a relatively light updating load, i.e. even if there is no great demand to write new undo information. This contrasts markedly with traditional rollback segment management, where under light updating loads undo entries could (and would) remain available for several hours to generate the read-consistent data sometimes required by long-running reports. Fortunately the instance parameter undo_retention can be altered dynamically using alter system set, and this may become necessary at sites where long report runs take place and updates cannot be completely prevented while those reports are running.
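When choosing undo_retention it also helps to sanity-check the undo tablespace size against it. A commonly quoted rule of thumb is undo space ≈ undo_retention × undo blocks generated per second × block size; the workload rate below is invented for illustration:

```python
undo_retention = 900       # seconds each undo record must be kept
undo_blocks_per_sec = 50   # peak undo generation rate (e.g. from v$undostat)
db_block_size = 8192       # bytes per block

# Required undo space ~= retention window * undo written per second.
undo_bytes = undo_retention * undo_blocks_per_sec * db_block_size
print("undo tablespace needs roughly %.1f MB" % (undo_bytes / 1024.0 / 1024.0))
```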

Wednesday, February 08, 2006

ADF JClient: Creating monolithic jar files to run JClient applications

The following information saved my life :) I do not know where I got it (therefore sorry for not referencing it), but it is useful if you want to package your code as a standalone jar file that depends on lots of different external libraries.

Within JDeveloper 10g it doesn't seem obvious how to deploy a JClient application within a monolithic Java archive file that can then be used to run the JClient application on the client using java -jar. The default deployment setting assumes that all dependency libraries are part of the classpath on the deployment platform, which means that only the application-specific classes need to be deployed in the application's archive file. This assumption makes sense if you have more than one ADF JClient application that runs on the local client. To deploy a JClient application so it runs standalone out of a jar file, with no additional setup required on the client machine, you need to add all the dependency classes to the application deployment jar, which can quickly grow to 17 MB in size as a result. Do as follows:

1) In the ADF JClient project, create a new JAR File deployment profile.

2) In the profile settings, for the JAR options, specify the name of the runnable class (to make a runnable jar file)

3) Create a new "Dependency Analysis" File group.

4) Select all the libraries from the Libraries tab on the Contributors page.

5) Make sure "Include Contents in Output" is checked. Otherwise the jar files of these libraries will be added, which is of no help at runtime.

6) Deploy the application to the jar file

7) Run the jar file, e.g: java -jar archive1.jar

Monday, February 06, 2006

Helpful hints for Designer

Process Modeler

  1. To display the Flow Name automatically, right click in the swim lane, select Customize Graphics, and click the Display Flow Name (On Create) checkbox.
  2. To increase/decrease the width of the swim lane, select the organizational unit, press shift + down arrow or shift + up arrow respectively.
  3. To change a process to a decision point and denote by using a diamond, right click in the swim lane, select customize graphics, select Decision Point from the drop down list, select Enhanced Symbol from the Mode group box. You will need to change each Process Modeler Diagram that will have a decision point.
  4. After a process, etc. is created, it is automatically added to the repository even if you have not saved the diagram. Therefore, if you make a mistake and the process, etc. should not be included, you must Delete it from the repository. If you Cut a process, etc. from a diagram, this merely removes it from the diagram, it does not delete it from the repository. Be careful how you use Cut and Delete.

Entity Relationship Diagram

  1. If your diagram is large and you would like to view only the entities and their relationships, you can opt not to display the attributes by selecting, Options> Customize from the menu bar. Click the Attributes checkbox in the View group box to remove the checkmark.
  2. When creating a relationship, to straighten the line between them, click the line and then use the up and down arrows or left and right arrows depending upon if the line is vertical or horizontal. Another option is to select Options>Customize from the menu bar and click the Snap checkbox in the Grid group box. You can also display the grid by clicking the Display checkbox in the Grid group box.
  3. To improve the readability and understanding of your relationships, you may need to create dog-legs or angled lines. After the initial relationship is created between two entities, you can click on intermediate points to create angled lines. To get the desired angle, you hold down the shift key and click the middle of the line to create a drawing point. You can then drag this drawing point to the desired spot. To remove the drawing point, press the shift key and click on the point again.

Repository Object Navigator (RON)

  1. To insert documents such as Word or email, from the RON navigator expand Reference Data Definition heading, highlight Documents and click the + (Create Object) button on the left side. The Document Properties window will open. Enter a document name, author, type, comment, and other information as necessary. Under the Documentation heading, single click the yellow icon located to the left of Document Text, a text pad window opens. Type information or copy and paste from another document and then save it.

Repository Reports

To generate a report in HTML format that can be viewed by a client or non-Designer person via a browser, select the required report in the navigator; the Parameters Palette window will then open. Set the Destination Type to File, enter the LAN path in the Destination Name with .htm as the extension, change the Destination Format to html, and leave the Mode as bitmap. Run the report.

Friday, February 03, 2006

Setup UNIX Sendmail to Access SMTP Gateway

The steps below apply to Sun servers running Solaris 2.6 or 2.8; consult your system administration manual for details on how to do this for other hardware vendors and operating systems.

1. If the sendmail daemon is currently running on your system, terminate it with the following command:

/etc/init.d/sendmail stop

2. Copy /etc/mail/main.cf to /etc/mail/sendmail.cf

3. Edit /etc/hosts and place an entry here for the SMTP gateway machine, e.g:

1.2.3.4 mailhost

4. To test connectivity to the SMTP machine called mailhost, enter the following command:

telnet mailhost 25

This will initiate a telnet session with the mailhost machine on port 25, the port on which the SMTP daemon listens for incoming messages.
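The manual telnet test can also be scripted. The sketch below is an assumption on my part, not part of the original procedure: the helper name `smtp_banner_ok` is made up, and it relies on the fact that an SMTP daemon greets a new connection with a "220" banner line ("mailhost" is the gateway alias added in step 3).

```shell
#!/bin/sh
# Hypothetical helper: decide from the greeting line whether an SMTP
# daemon answered. SMTP servers open every session with a "220" banner.
smtp_banner_ok()
{
    case "$1" in
        220*) return 0 ;;
        *)    return 1 ;;
    esac
}

# Grab the banner (if any) from mailhost's port 25 and evaluate it.
banner=`{ echo QUIT; sleep 1; } | telnet mailhost 25 2>/dev/null | grep '^220'`
if smtp_banner_ok "$banner"
then
    echo "mailhost accepts SMTP connections on port 25"
else
    echo "no SMTP response from mailhost on port 25"
fi
```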

5. Edit the /etc/mail/sendmail.cf file and edit the following entries:

Change Dmsmartuucp to Dmether

This changes the mailer program for remote mail delivery from uucp to the smtp mailer.

Change DR ddn-gateway to DR mailhost

Change CR ddn-gateway to CR mailhost

This changes the behavior of the sendmail daemon to route all remote mail generated on this server to the SMTP host you defined in /etc/hosts.

6. Save the sendmail.cf configuration file

7. Start the sendmail daemon by issuing the following command:

/etc/init.d/sendmail start

8. Run $POM_TOP/bin/MasterScript.sh stop

9. Add /usr/lib to the PATH in the .profile file and activate it

10. Run $POM_TOP/bin/MasterScript.sh start apps/apps

Wednesday, February 01, 2006

Tip : dbms_random

SELECT dbms_random.string('U', 2)||TRUNC(dbms_random.VALUE(1000, 9999))
FROM dual;

Output : AU3910

SELECT dbms_random.string('U', 2)||TRUNC(dbms_random.VALUE(1, 9))||
dbms_random.string('U', 2)||TRUNC(dbms_random.VALUE(1,9))||
dbms_random.string('U', 2)
FROM dual;

Output : AI7KU4EE

Monday, January 30, 2006

Meaning of Oracle Release Numbers

Do you know the meaning of Oracle release numbers such as 9.2.0.1.0 or 9.2.0.4.0?

Oracle release numbers consist of five digits. For example, in 9.2.0.1.0 the first digit is the major version number (9), the second is the release number (2), the third is the maintenance release number (0), the fourth is the generic patch number (1) and the fifth is the platform-specific patch set number (0).
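The five components can be peeled apart with `cut`; this is just an illustration of the numbering scheme, using the 9.2.0.1.0 example from above.

```shell
#!/bin/sh
# Split an Oracle release string into its five components.
version=9.2.0.1.0

echo "Major version number:           `echo $version | cut -d. -f1`"
echo "Release number:                 `echo $version | cut -d. -f2`"
echo "Maintenance release number:     `echo $version | cut -d. -f3`"
echo "Generic patch number:           `echo $version | cut -d. -f4`"
echo "Platform-specific patch number: `echo $version | cut -d. -f5`"
```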

Sunday, January 29, 2006

Report changes from alert.log file (unix server only)

In the morning you want to check the alert.log file, but only for the changes. Some work is required to determine when you last looked at the file, where to start reading, and so on. Also, on Monday you want to report everything since Friday.

The following commands will extract only the changes since the last time you have run the report:

# get the log filename
log=$ORACLE_BASE/admin/$ORACLE_SID/bdump/alert_${ORACLE_SID}.log

if [[ -f $log.last ]]; then
    # get the line number recorded by the last run
    today=`cat $log.last`
else
    # find the last date stamp in the file
    lastdt=`tail $log | grep : | grep 200 | tail -1 | cut -c-11`

    # get the line number of the first occurrence of that date
    today=`grep -n "$lastdt" $log | head -1 | cut -d: -f1`
fi

# remember the current last line for the next run:
echo `wc -l < $log` > $log.last

# today's file
newlog=/tmp/mon_alert_${ORACLE_SID}.log

# extract only the new data (tab is octal 011)
sed -n $today,'$'p $log | tr '\011' ' ' > $newlog

# the rest is up to you.



Alternatively, you could run a cron job overnight to record the number of lines:

# get the log filename
log=$ORACLE_BASE/admin/$ORACLE_SID/bdump/alert_${ORACLE_SID}.log
echo `wc -l < $log` > $log.last


# In the morning you just report the new data from the file:
today=`cat $log.last`

# today's file
newlog=/tmp/mon_alert_${ORACLE_SID}.log

# extract only today's data
sed -n $today,'$'p $log | tr '\011' ' ' > $newlog

Look for anything of interest in the $newlog file
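The same bookkeeping can be tried against a throwaway file; the sketch below is not Oracle-specific (the ORA- lines and /tmp paths are made-up examples), it just demonstrates the "remember the line count, report only what came after" technique.

```shell
#!/bin/sh
# Remember how long the file was, then print only lines added since.
log=/tmp/demo_alert.log
printf 'ORA-00600 from yesterday\n' > $log
echo `wc -l < $log` > $log.last                # remember the current length

printf 'ORA-01555 from last night\n' >> $log   # later, new data is appended

last=`cat $log.last`
awk -v n="$last" 'NR > n' $log                 # print only the appended lines
```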

Friday, January 20, 2006

Unix Core Files

When you are running a program on a UNIX machine that terminates abnormally, possibly because it crashed, the operating system creates a core file in the current working directory. The system dumps information into the core file about what the program was doing at the time it crashed, e.g., which subroutine it was calling or what memory address it was accessing. The format of the information in this file is pretty cryptic, but the file can be post-processed by an appropriate debugging tool to give application developers useful information. If you are not trying to diagnose a bug in an application, the file contains no other useful information and can be safely deleted. In fact, it should be deleted because often it is quite large (>100 MB).

How can you prevent the core file from getting so large? One option is to create a directory named core in your current working directory; when the application crashes, the operating system cannot write the core file because a directory with that name already exists. A second option is to set a limit on how big the core file is allowed to grow, using the C shell command

limit coredumpsize xxxx

where xxxx is the maximum size in kB that the core file is allowed to be. In theory, the core file will never exceed the limit; in practice, some operating systems ignore it. This command has been incorporated into the standard NML .cshrc file.
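For Bourne/Korn shell users the equivalent is `ulimit -c`. Note the units differ: csh's `limit` takes kilobytes, while `ulimit -c` counts 512-byte blocks on many systems, so check your shell's man page before picking a number.

```shell
#!/bin/sh
# Bourne/Korn shell equivalent of the csh "limit coredumpsize" command.
ulimit -c 0        # forbid core files entirely
ulimit -c          # show the current limit (prints 0)
```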

Tuesday, January 17, 2006

oraenv and coraenv Utilities

The oraenv and coraenv utilities both aid in setting the Oracle environment on UNIX systems (other utilities exist on the Windows platform to set the Oracle Home). The coraenv utility is appropriate for the UNIX C shell; oraenv should be used with either the Bourne or Korn shell.

Database operations require ORACLE_HOME to be set before the user may access the database. If ORACLE_HOME is not set, commands such as sqlplus, exp, or any other Oracle utility for that matter, will not be found.

Both utilities are shell scripts that do the same thing in their respective UNIX shells. They will prompt for the SID of the database unless ORAENV_ASK is set to NO. The utility will also append the ORACLE_HOME bin directory to the PATH, so that the Oracle executables can be found.

The oraenv command will prompt for the SID of the database that you wish to access and set $ORACLE_HOME accordingly.

$ . oraenv
ORACLE_SID = [] ? ASG920

The dbhome utility can now be used to verify that $ORACLE_HOME is correct.

$ dbhome
/usr/oracle/9.2.0

The “dot space” part of the command is required so that the environment changes take effect in the current shell; without it, the script would run in a subshell and the changes would be lost when that subshell exits.

These commands can be used to avoid specifying the network service name when issuing commands. For instance, without using oraenv, an sqlplus command would look like:

$ sqlplus system/manager@nameofservice as sysdba

whereas after oraenv has been executed, the following command would work:

$ sqlplus system/manager as sysdba

Saturday, January 14, 2006

Starting and Stopping on Windows

The dbstart and dbshut shell scripts do not exist on Windows platforms. Consequently, Oracle database startup and shutdown are implemented completely differently. The oradim utility is used on the Windows platform to perform these tasks.

C:\oracle9i\bin\oradim -startup -sid ORCL92 -usrpwd manager
-starttype SRVC,INST -pfile C:\oracle9i\admin\ORCL92\pfile\init.ora

  • startup — Indicates that the specified instance should be started.
  • sid — The SID of the database to start.
  • usrpwd — The password for the database user.
  • starttype — Specifies whether to start the instance, the service, or both (SRVC, INST).

The following command can be used to shut down the instance with oradim:

C:\oracle9i\bin\oradim -shutdown -sid ORCL92 -shuttype SRVC,INST
-shutmode A

Notice that no password is needed to perform this task.

The shuttype parameter specifies what is to be stopped: the service (SRVC), the instance (INST), or both (SRVC,INST). The shutmode parameter specifies the method that should perform the shutdown: (A)bort, (I)mmediate, or (N)ormal.

Each operation, regardless of success, is logged in the oradim log file (ORACLE_HOME\database\OraDim.Log). This file should be checked for errors after each oradim command is executed.

The oradim utility provides more than just the ability to start and stop Windows databases. oradim can create and edit databases. It also allows DBAs to configure script-based installation mechanisms, bypassing the Oracle Database Configuration Assistant’s graphical user interface (GUI).

For a reference of all oradim commands, use the oradim -help command.

Thursday, January 12, 2006

Automatic Startup & Shutdown

The following list summarizes the functions performed by the different Oracle startup and shutdown scripts (which are invoked during system startup and shutdown, respectively).

Script : Description

$ORACLE_HOME/bin/dbstart : Ensures a clean startup of database instance(s), even after system failure

$ORACLE_HOME/bin/dbshut : Ensures a clean shutdown of database instance(s)

/var/opt/oracle/oratab : Contains a field that specifies whether a particular database instance should be brought up/down at system startup/shutdown time; when this field is set to "Y", the dbstart and dbshut scripts bring that database and instance up or down

/etc/inittab : Controls the initialization process



Relevant Files : Description

/etc/init.d/dbstart : Script to call $ORACLE_HOME/bin/dbstart

/etc/init.d/dbshut : Script to call $ORACLE_HOME/bin/dbshut

/etc/rc2.d/S99dbstart : Link to the script /etc/init.d/dbstart

/etc/rc0.d/K01dbshut : Link to the script /etc/init.d/dbshut

$ORACLE_HOME/lsnr_start.sh : Script to start the listener

$ORACLE_HOME/lsnr_stop.sh : Script to stop the listener

System V initialization scripts are contained in /etc/rcn.d directories, where "n" is the run-level value of the script. A run level of 0 usually signifies power-down mode, while a run level of 2 signifies multi-user mode. The directories contain initialization scripts such as "S75cron" and "K30tcp". These scripts are named using the following method:

[K or S][two-digit number][descriptive filename]

Names starting with "S" indicate scripts that are called at startup; names starting with "K" indicate scripts that are called at shutdown time.

Scripts containing larger numbers in their names are executed after those with lower numbers. Oracle startup scripts typically contain larger numbers in their names, such as "S99oracle", indicating that the script should be run after the system has been started up. Oracle shutdown script names, on the other hand, usually contain smaller numbers, such as "K01oracle" indicating that the script should be run before system shutdown.
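The ordering works because run-level scripts are executed in plain lexical order, so the two-digit number decides the sequence. A quick illustration (the script names are examples):

```shell
#!/bin/sh
# Sorting the names shows the order in which init would run them:
# S75cron first, S80lp next, S99oracle last.
printf 'S99oracle\nS75cron\nS80lp\n' | sort
```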

Every Oracle installation has an 'oratab' file which contains an entry for every database in the system. You will need 'root' privileges to accomplish the following:

STEPS

1. Edit the 'oratab' file (/var/opt/oracle/oratab):

mlive:/export/home/app/oracle/product/8.1.6:Y

2. As the oracle user, create the following shell script files “lsnr_start.sh” and “lsnr_stop.sh” in the $ORACLE_HOME directory and ensure that both files are executable by the oracle userid. Since the environment “lsnrctl” needs is not in place at boot time, scripts that set the needed variables have to be used to start the listener:

lsnr_start.sh :

PATH=/export/home/app/oracle/product/9.2.0.1.0/bin:.; export PATH
ORATAB=/var/opt/oracle/oratab; export ORATAB
ORACLE_HOME=/export/home/app/oracle/product/9.2.0.1.0; export ORACLE_HOME
lsnrctl start

lsnr_stop.sh :

PATH=/export/home/app/oracle/product/9.2.0.1.0/bin:.; export PATH
ORATAB=/var/opt/oracle/oratab; export ORATAB
ORACLE_HOME=/export/home/app/oracle/product/9.2.0.1.0; export ORACLE_HOME
lsnrctl stop

3. As root, create the following OS script files dbstart and dbshut in the /etc/init.d directory and ensure that both files are executable by the oracle userid:

dbstart :

su oracle -c /export/home/app/oracle/product/9.2.0.1.0/bin/dbstart
su oracle -c /export/home/app/oracle/product/9.2.0.1.0/lsnr_start.sh

dbshut :

su oracle -c /export/home/app/oracle/product/9.2.0.1.0/bin/dbshut
su oracle -c /export/home/app/oracle/product/9.2.0.1.0/lsnr_stop.sh

4. As root, link dbstart and dbshut into the /etc/rc*.d directories:

# ln -s /etc/init.d/dbshut /etc/rc0.d/K01dbshut

# ln -s /etc/init.d/dbstart /etc/rc2.d/S99dbstart
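The two files in step 3 can also be combined into one System V-style init script that takes a start/stop argument, which is the usual pattern for /etc/init.d. This is a sketch of that alternative layout, not part of the procedure above; the combined file (and any S99/K01 link names you would give it) are assumptions, and the paths are the same examples used earlier.

```shell
#!/bin/sh
# Hypothetical combined init script: dispatch on the argument that
# init passes ("start" at boot, "stop" at shutdown).
ORA_OWNER=oracle
ORA_HOME=/export/home/app/oracle/product/9.2.0.1.0

case "$1" in
start)
    su $ORA_OWNER -c $ORA_HOME/bin/dbstart
    su $ORA_OWNER -c $ORA_HOME/lsnr_start.sh
    ;;
stop)
    su $ORA_OWNER -c $ORA_HOME/bin/dbshut
    su $ORA_OWNER -c $ORA_HOME/lsnr_stop.sh
    ;;
*)
    echo "Usage: $0 {start|stop}"
    ;;
esac
```

A single file like this would then be linked twice, once as an S99 script in /etc/rc2.d and once as a K01 script in /etc/rc0.d.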


Relevant Oracle Metalink Documents

1005041.6 How to Automatically Start the Listener On UNIX

61333.1 Automatic startup and shutdown of oracle instances

105957.1 How to setup Oracle to startup automatically if system is restarted

1068670.6 Listener will not start autostart at boot time

1031930.6 Solaris RC&RC.D startup&shutdown scripts

91815.1 The Annotated dbstart Script

Thursday, January 05, 2006

Availability of the DB and listener

The “db_check.sh” script shown below is created under the “/export/tmscs/db_cron” folder.

db_check.sh

#!/bin/sh
#
############################
# FUNCTION: IsListenerRunning
############################
#
IsListenerRunning()
{
ps -ef | egrep '/bin/tnslsnr' | egrep -v 'grep' > /dev/null 2>&1
}

############################
# FUNCTION: IsListenerUp
############################
#
IsListenerUp()
{
lsnrctl status > /dev/null 2>&1
}

############################
# MAIN PROGRAM
############################
#
CHECK_LISTENER=1
CHECK_LISTENER_IS_UP=1
SPOOLFILE=/export/tmscs/db_cron/db.log

#
# check if database is running
#
if [ `echo "select dummy||'OK' from dual;" | sqlplus -s system/manager | grep XOK` ]
then
IS_OK="XOK"
else
IS_OK="FALSE"
fi

#
# check if listener process is running
#
IsListenerRunning || CHECK_LISTENER=0

#
# check if listener is up
#
IsListenerUp || CHECK_LISTENER_IS_UP=0

date +"Date is %D %nTime is %T" > $SPOOLFILE

if [ $IS_OK = "XOK" ]
then
echo "SELECT 'Database has been up and running for '|| round(24*(SYSDATE-logon_time),1) || ' hour(s)' FROM sys.V_\$SESSION WHERE sid=1;" | sqlplus -s system/manager | grep running >> $SPOOLFILE
else
echo "Database is down" >> $SPOOLFILE
fi

if [ $CHECK_LISTENER = 0 ]
then
echo "Listener is not Running" >> $SPOOLFILE
fi

if [ $CHECK_LISTENER_IS_UP = 0 ]
then
echo "Listener is down" >> $SPOOLFILE
lsnrctl start && echo "Listener is started successfully" >> $SPOOLFILE
else
echo "Listener is up" >> $SPOOLFILE
fi

If the listener is not running, the script will start it. But if the database is down, the script only reports it; it will not restart the database.

The crontab job executing this script is created under the root user. Since root already has several crontab jobs, the new job line is added to the crontab file (/var/spool/cron/crontabs/root), which is then submitted to the system;

15,30,45,01 * * * * su - oracle -c "/export/tmscs/db_cron/db_check.sh" > "/export/tmscs/db_cron/err.log"

And then to submit the file;

crontab crontab_filename

Essentially, root is the super user and can thus impersonate oracle without a password. su oracle will enable root to become oracle. The -c means 'execute the following command as oracle'. The extra '-' between su and oracle is crucial - because of this, the script is being executed as if the user oracle had logged in himself. In other words all environment variables like $ORACLE_HOME, $ORACLE_SID and $PATH are set correctly.

This job runs every fifteen minutes (at minutes 1, 15, 30 and 45 of each hour).

After running the script, the crontab job will produce 2 files (db.log and err.log) under the folder “/export/tmscs/db_cron”

db.log

It gives information about database and listener.

If database and listener are running;

Date is 01/07/03

Time is 10:12:57

Database has been up and running for 119.4 hour(s)

Listener is up

If database is down and listener is running;

Date is 01/07/03

Time is 13:53:43

Database is down

Listener is up

If database is up and listener is down;

Date is 01/07/03

Time is 13:54:37

Database has been up and running for 0 hour(s)

Listener is not Running

Listener is down

If both the database and the listener are down;

Date is 01/07/03

Time is 13:45:01

Database is down

Listener is not Running

Listener is down

err.log

$ more err.log

PATH is /usr/bin:

PATH is

/export/home/app/oracle/product/

9.2.0.1.0/bin:.:/usr/bin:.:/usr/local/bin

Sun Microsystems Inc. SunOS 5.7 Generic October 1998

You have mail.

Running Oracle .login ...

It shows the log of the last run; if an error occurs, the error can be seen in this file.

Note that when the folder (/export/tmscs/db_cron) is changed, both the crontab entry and the db_check.sh script must be corrected.

Configuration :

  • The SPOOLFILE variable in the db_check.sh file must be configured (currently SPOOLFILE=/export/tmscs/db_cron/db.log)
  • The password given for the system user in the db_check.sh file must be corrected (currently system/manager)
  • The path given for the db_clean.sql and err.log files must be corrected (currently /export/tmscs/db_cron/)
  • All created scripts and folders must be owned by oracle:dba with rights of “711”

Server and Database Naming

As servers and databases are deployed for global applications, it becomes critical that server and database names are standardized globally. This document provides the guidelines that should be followed to name servers and databases for global applications. The naming conventions detailed in this document are provided as a formula. Certain components of the formula have been standardized so that they can be utilized very easily. However, there will be exceptions where the naming convention cannot be applied to a global application. It is then up to the GEOs to get together and agree on a naming convention that works for the specific application and that can be implemented on a global basis.

Server Naming Formula: application + function . domain

Database Naming Formula: application + function + geo specification

The following tables provide the recommended abbreviation standards to be used in naming servers and databases for global applications.

Examples :

FUNCTION : ABBREVIATION

Disaster Recovery database : DR
Disaster Recovery server : DR or DBDR
Test database : TST
Test server : TST or TEST
Development : DEV
Training : TRN
Release : REL
Test Reporting : TRP
DBnet (off SWAN) : DBX
Middle tier : MT
Middle tier (partner) : MTP
Web server : WS

GEO : DOMAIN

APAC : singapore; japan
EMEA : holland; turkey
Americas : central; east
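Putting the two formulas together, name composition can be sketched as below. Every concrete value here is a made-up example (application "rad", function "tst", GEO "sg", domain "singapore"); only the formulas come from this document.

```shell
#!/bin/sh
# Server formula:   application + function . domain   (lower case)
# Database formula: application + function + geo      (upper-case SID)
app=rad
func=tst
geo=sg
domain=singapore

server="$app$func.$domain"
db=`echo "$app$func$geo" | tr '[a-z]' '[A-Z]'`   # SIDs should be uppercase

echo "Server name:   $server"
echo "Database name: $db"
```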

Notes:

  1. Oracle database names (SIDs) typically should be uppercase.

  2. There is no hard limit to the length of a server name, but it is recommended that server names be no more than 8-14 characters, if possible.

  3. Server names typically should be lower case.

  4. Server names can include a hyphen to separate the application and function components of the server name. However, it has been discovered that a hyphen in the server name causes a problem when the Oracle database init.ora parameter global_names is set to TRUE and the db_domain parameter is set to the server name. To avoid this problem, do not use a hyphen in the server name.

  5. The domain component of the server name will always be separated with a period ('.') from the application and function components.

  6. In applications where there will be multiple servers of the same function, sequence numbers can be used. As an example, Radiance has 3 different servers hosting the middle tier. They are named rad-mt01.domain, rad-mt02.domain, rad-mt03.domain.

  7. Oracle database names (SIDs) are limited to 8 characters:

    1. Application component of Oracle database name can be up to 3 characters
    2. Function component of Oracle database name can be up to 3 characters
    3. GEO specification is limited to 2 characters