Monday, February 27, 2006

Tip: Easy Connect String

EZCONNECT - Easy Connect String
You can avoid having to use a TNSNAMES.ORA file by specifying the full connect string at the command prompt. The hostname is the only mandatory piece.

sqlplus username/password@[//]hostname[:port][/service_name]

E.g. sqlplus scott/tiger@10gserver:1521/ORCL

or, since the hostname is the only mandatory piece:

E.g. sqlplus scott/tiger@10gserver
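Since only the hostname is mandatory, the optional pieces simply concatenate onto it. A small Python sketch of how the descriptor is assembled (the function name is mine, not part of any Oracle tool):

```python
def ez_connect(host, port=None, service_name=None, use_slashes=False):
    """Build an EZCONNECT descriptor; only the hostname is mandatory."""
    s = "//" if use_slashes else ""
    s += host
    if port is not None:
        s += ":%d" % port          # optional port
    if service_name is not None:
        s += "/" + service_name    # optional service name
    return s

print(ez_connect("10gserver", 1521, "ORCL"))  # 10gserver:1521/ORCL
print(ez_connect("10gserver"))                # 10gserver
```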

Set up EZCONNECT in the sqlnet.ora file as follows:

NAMES.DIRECTORY_PATH = (TNSNAMES, EZCONNECT)

This gives us an easy and secure way of connecting to databases from a client machine without leaving the DB information in TNSNAMES.ORA.

This information is taken from http://www.dba-village.com/

Tuesday, February 21, 2006

Oracle Firefox Search Plugins

Mozilla Firefox

General Oracle Search Plugins

ORACLE-BASE.com Search Plugin (oracle-base-plugin) - Searches the ORACLE-BASE.com site.
OTN Search Plugin (otn-plugin) - Searches the Oracle Technology Network site.
Oracle Metalink Search Plugin (oracle-metalink-plugin) - Searches the Oracle Metalink site.

Oracle Error Search Plugins

Oracle 8.1.7 Errors Search Plugin (oracle817-errors-plugin) - Searches the Oracle 8.1.7 error messages manual.
Oracle 9.0.1 Errors Search Plugin (oracle901-errors-plugin) - Searches the Oracle 9.0.1 error messages manual.
Oracle 9.2.0 Errors Search Plugin (oracle920-errors-plugin) - Searches the Oracle 9.2.0 error messages manual.
Oracle 10.1.0 Errors Search Plugin (oracle101-errors-plugin) - Searches the Oracle 10.1.0 error messages manual.
Oracle 10.2.0 Errors Search Plugin (oracle102-errors-plugin) - Searches the Oracle 10.2.0 error messages manual.

Oracle Documentation Search Plugins

Oracle 8.1.7 Documentation Search Plugin (oracle817-docs-plugin) - Searches the Oracle 8.1.7 manuals.
Oracle 9.0.1 Documentation Search Plugin (oracle901-docs-plugin) - Searches the Oracle 9.0.1 manuals.
Oracle 9.2.0 Documentation Search Plugin (oracle920-docs-plugin) - Searches the Oracle 9.2.0 manuals.
Oracle 10.1.0 Documentation Search Plugin (oracle101-docs-plugin) - Searches the Oracle 10.1.0 manuals.
Oracle 10.2.0 Documentation Search Plugin (oracle102-docs-plugin) - Searches the Oracle 10.2.0 manuals.

This information is taken from http://www.oracle-base.com

Tuesday, February 14, 2006

Standalone Reports Server on 10g R2 Developer

If you try to install the Reports Server as an NT service in 10gAS R2 (10.1.2.0.2) by running the following command at a command prompt:

rwserver -install autostart=yes

you will get an error message saying "Please consult the installation guides for how to setup and run this program".
Beginning with Oracle Reports 10g Release 2 (10.1.2), running Reports Server as a Windows service is no longer supported (rwserver -install server_name). As a result, the related command line keywords INSTALL and UNINSTALL are also obsolete.

Start or stop a Reports Server registered with Oracle Enterprise Manager 10g only through Oracle Enterprise Manager 10g/OPMN. OPMN automatically restarts Reports Server if it stops responding for some reason. On Windows, OPMN itself is run as a Windows service. Start the Reports Server as a standalone server on Windows using the following command:

rwserver server=server_name

Add the BATCH command line keyword to start up the server without displaying dialog boxes or messages.

rwserver server=server_name batch=yes

Monday, February 13, 2006

J2EE vs .NET: Where Is Application Development Going?

J2EE vs .NET: Where Is Application Development Going?
by Duncan Mills, outlining the rise and rise of the meta-framework

Where is application development going? What's the next cool thing? You may have answers to these questions, and your answers may be the same as or different from mine or anyone else's. The point is we just don't really know, and that's a problem. Saying to the managers of enterprise development shops "Oh yes, just standardize on J2EE and everything will be fine" is not going to cut it. These folks are savvy enough to know that J2EE is a minefield of choice in standards and APIs. They need and deserve more direction than that. So you can make a suggestion as to a good set of technologies to use in a particular scenario - let's say TopLink, Lucene, Struts and JSP, as a random example - but of course there's a catch. You've just flagged a whole bunch of different technologies and APIs, each with a learning curve, each with different download locations or vendors, and possibly conflicts in the APIs they consume.

This is, I think, why .NET presses a lot of the right buttons. It's a meta-framework - a one stop shop. Say to a development manager "you just have this one thing to do everything you need", and of course it's going to be attractive, irrespective of what the reality might be under the covers. There is no doubt that there are a lot of fantastic point solutions and frameworks out there in the J2EE world, but as standalone islands of functionality they have a much harder sell in the corporate market.

If we look for frameworks that have been successful and widely adopted, and examine them to see what gives them the edge, what will we find? Take Struts for instance (love it or hate it). I'd be hard pressed to call Struts a meta-framework by my current thinking, but for its time, it was. It wasn't just a collection of taglibs - that's what everyone was doing - it was taglibs plus a front controller, and it evolved to encompass validation as well. Struts became a worthwhile skill because with that one notch in your belt you could tackle a good chunk of an application's development.

What Defines a Meta-Framework today?

Broad Scope - the framework needs to cover everything from UI creation and page flow controller functionality to integration with multiple service providers including EJB, Web services, POJO and so on. It's not just a vertical slice through the stack.

Pluggability - the flexibility within the meta-framework to incorporate choice into the stack. There is no reason at all that a meta-framework cannot encapsulate existing best-of-breed service providers. This is particularly true in an area like O/R mapping, where I might want to use EJB, I might want to use a POJO-based TopLink solution, or something totally new might come along. Just give me the choice (but feel free to offer best practice solutions).

Coexistence - Given that it's unlikely that a meta-framework will be able to implement everything itself, it's going to be in turn a consumer of service frameworks - this is implicit in the pluggability argument. This in turn implies that the coupling between services within the framework has to be loose, otherwise the pluggability dream cannot be fulfilled. It also, however, imposes a degree of responsibility on the provider of the meta-framework.

Someone has to test all this stuff together: are there classloading problems, for instance? Do all these components share the same version of key APIs? And so on. If you construct your own bespoke architecture, this is something you'd have to worry about. If you consume a meta-framework, one of the things you should be getting is this certification. Of course that might mean that components within a meta-framework are not absolutely cutting edge within a specific genre, but do developers want cutting edge, or do they want assured working?


Abstraction - where you have choice you need abstraction. If I want to swap out my O/R mapping layer, I don't want to have to adopt a whole new set of APIs at the binding or transactional glue points. The meta-framework needs to add value here; standards such as the common data binding proposed by JSR 227 are ideally placed to provide this type of plumbing.

I'm not dumb enough to suppose that swapping out is actually common within an individual application, but the point is that the same skillset can be reused between projects, or where a project has a heterogeneous set of providers. Abstraction generally is where meta-frameworks can add the most value, because it leads into the next point - longevity.

Longevity - APIs change; this is a fact of life. Frameworks provide a level of abstraction on top of the base platform APIs, and a degree of insulation from that change as a result. But frameworks change too with time, particularly active, community-driven ones. Meta-frameworks can add another layer of abstraction and programmer insulation on top of this shifting morass. You code to the meta-framework and the plumbing is handled for you; as the sands shift, the meta-framework adapts to that on your behalf. Can this work in reality? Well yes, certainly in the world of proprietary frameworks we have environments like Oracle Forms, which have persisted and evolved for almost 20 years, bringing code forward from VT terminals, through Windows GUIs, to the Web, essentially unchanged, although perhaps enhanced as more capabilities appeared.

This then is the major carrot that meta-frameworks can offer to enterprise development shops - stability, but without stagnation. A meta-framework has to evolve and offer its consumers the latest and greatest whilst at the same time maintaining a rock solid foundation for existing applications.


Tooling - Meta-frameworks will often be based around both coded abstraction layers and a large amount of in-metadata configuration. As such, tools support is an important part of the picture. Tooling can add yet another layer of abstraction through alternative visualizations such as diagramming or GUI screen designers. This helps with the whole future-proofing issue.
But Why Now?

Why should you believe that meta-frameworks have any traction? We've not really seen any SOA-like marketing buzz around such frameworks, no great vendor splashes. Well, I think now is the time because it's happening in a stealthy and underhand way anyway. If we ignore the vendors for a second and just look at the standards and trends: what's the big trend at the moment? POJOs and IOC - think EJB 3.0, think Spring, you name it - loose coupling, in other words.

I also think that the JavaServer Faces (JSF) standard is a key player here. JSF offers an abstracted UI definition and event handling model that can be run across multiple devices. That's a large chunk of meta-framework right there. If I learn JSF I can code for mainstream browsers, handhelds and even industrial telnet devices with a single skill set. That works for me!

Where Are The Meta-Frameworks?

We've seen that Microsoft can do it with .NET, but they have the luxury of almost total control. Are fully fledged meta-frameworks possible in the open-standards J2EE space?

Well yes, I think it can be done. Many frameworks aspire, but most of those that do lack the scope of a true meta-framework. Maybe they only support one UI technology, or only allow EJBs for persistence. That's not good enough: a framework must be adaptable and willing to evolve to drink the soup du jour, but of course do that in a balanced and supportable way. To date I'd say that, to varying degrees, the Oracle ADF framework and the Spring framework are closest to exhibiting most of the essential meta-framework attributes. Keel is also out there in this space but is lacking traction and is unlikely to be that attractive to large enterprises.

Meta-frameworks, then, have the potential to offer exactly what the large enterprise shops need: a certified technology stack with the flexibility to meet the majority of requirements, and the promise of a lifetime that matches the application being built with it. I think it's inevitable that meta-frameworks are in the domain of the commercial vendors (and I include in that grouping the vendors operating on a service basis for open source, as well as the paid-for-product vendors). Maintaining a meta-framework is a long-term and expensive commitment; it's going to have to be paid for, either through license costs on the framework itself, or through support/service costs. It's also got to be backed by companies that stand a chance of being around for the required timescales.
The vendors though, are out there and ready to jump into this space. The meta-frameworks are coming...

Sunday, February 12, 2006

Close all forms with one button

When you have a multi-form application with several open form modules, they are organized within one MDI parent Runform window. It's standard behavior on Win32 that when the user presses the 'Close' button on an MDI parent window, all open child windows start to close. However, pressing the 'Close' button on the MDI parent Runform window will cause only the current form to be closed. To achieve similar functionality in an MDI Runform session, we need the following code in every involved form module:

WHEN-NEW-FORM-INSTANCE trigger:

default_value('false','global.closing');


WHEN-WINDOW-ACTIVATED trigger:

if :global.closing='true' then

exit_form;

end if;

Now, it is up to the Forms developer to decide from where to trigger this closing process. A special toolbar button or menu item can be used for this purpose with the code:

:global.closing := 'true';

exit_form;

and all open forms within the current Forms MDI Runform parent window start to close.
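The pattern is just a shared flag that every form checks when it is activated. A rough Python simulation of the same idea (class and method names are hypothetical; this is not Forms code):

```python
class Form:
    closing = "false"          # plays the role of :global.closing
    open_forms = []            # all currently open form modules

    def __init__(self, name):
        self.name = name
        Form.open_forms.append(self)

    def on_window_activated(self):
        # WHEN-WINDOW-ACTIVATED: exit if the global flag has been set
        if Form.closing == "true":
            self.exit_form()

    def exit_form(self):
        Form.open_forms.remove(self)

def close_all(current):
    # the toolbar button: set the flag, then exit the current form
    Form.closing = "true"
    current.exit_form()

a, b, c = Form("a"), Form("b"), Form("c")
close_all(a)
for f in list(Form.open_forms):
    f.on_window_activated()    # each remaining form closes as it is activated
print(Form.open_forms)         # prints []
```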


Friday, February 10, 2006

Using the COPY and NAME_IN built-in

Using the COPY and NAME_IN built-in functions in Forms isn't explained that well in the documentation. But they can be very useful in making your forms more generic by enabling you to build up field names dynamically and subsequently set and/or get the field values. In addition, with the COPY function it's possible to programmatically insert non-numeric characters such as '%' into numeric fields, thus allowing wild-card searches to be performed.


NAME_IN

The NAME_IN function allows you to get the value of a variable whose name is itself held in a variable. Consider a form with two text fields on it - field1 and field2. Now into field1 enter the string 'Hello' and into field2 enter the string 'field1'. Now if you do a message(name_in(:field2)) it will display the string 'Hello'. As another example, suppose you want to know the value contained in the current form item.
You look at the help and see that there is a system variable called current_item. Great, I'll just message this out. However, you'll soon discover that system.current_item is the name of the current item - not what it contains. To get at the value of the current item, just enclose it within the name_in built-in - name_in(:system.current_item)


COPY


COPY is the complement of the NAME_IN function in that it allows you to set the value of a variable whose name is itself held in a variable. For example, suppose you wish to set the value of the current item to the string 'Hello' - assuming the current item is a text field. You can't simply do

:system.current_item:='Hello' ;

as Forms disallows this. But you can do a

COPY('Hello',:system.current_item);

Often you may have to dynamically create variables holding the names of fields on your form and set them to some value. Say you have 5 blocks - block1, block2, etc. - all containing a text field of the same name - stock_id. To set the value of the stock_id field that the cursor is currently on, you might use code like:

COPY('IBM',:system.current_block||'.stock_id');

One other use that COPY has is to place non-numeric characters into numeric fields programmatically during enter-query mode. Why would you want to do that? Mostly to allow the placing of wildcard characters such as '%'. You'll find that if you simply try:

:num_field := '123%'

Forms will issue an error message

However doing COPY('123%',:num_field) works OK.

This only works during enter-query mode, and you only need it if you have to enter the characters programmatically. You can just type in such characters normally if required.
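The indirection that NAME_IN and COPY provide can be mimicked with a dictionary of item values. A Python sketch (illustrative only, not Forms code):

```python
# item name -> item value, like the fields on a form
items = {"field1": "Hello", "field2": "field1"}

def name_in(name):
    """Return the value of the item called `name` (like NAME_IN)."""
    return items[name]

def copy(value, name):
    """Set the item called `name` to `value` (like COPY(value, item))."""
    items[name] = value

# field2 holds the *name* of another item; dereference it twice
print(name_in(items["field2"]))  # prints Hello

copy("World", "field1")
print(items["field1"])           # prints World
```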

Oracle Technical Interview Questions Answered

1. Explain the difference between a hot backup and a cold backup and the benefits associated with each.

A hot backup is basically taking a backup of the database while it is still up and running and it must be in archive log mode. A cold backup is taking a backup of the database while it is shut down and does not require being in archive log mode. The benefit of taking a hot backup is that the database is still available for use while the backup is occurring and you can recover the database to any point in time. The benefit of taking a cold backup is that it is typically easier to administer the backup and recovery process. In addition, since you are taking cold backups the database does not require being in archive log mode and thus there will be a slight performance gain as the database is not cutting archive logs to disk.

2. You have just had to restore from backup and do not have any control files. How would you go about bringing up this database?

I would create a text-based backup control file, stipulating where on disk all the data files were, and then issue the recover command with the using backup controlfile clause.

3. How do you switch from an init.ora file to a spfile?

Issue the create spfile from pfile command.

4. Explain the difference between a data block, an extent and a segment.

A data block is the smallest unit of logical storage for a database object. As objects grow they take chunks of additional storage that are composed of contiguous data blocks. These groupings of contiguous data blocks are called extents. All the extents that an object takes when grouped together are considered the segment of the database object.
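The hierarchy is easy to see in numbers: each extent is a run of contiguous blocks, and the segment is the sum of its extents. A Python back-of-the-envelope with made-up extent sizes:

```python
BLOCK_SIZE = 8192                      # assume an 8 KB data block

# each extent is a count of contiguous data blocks (invented values)
extents = [8, 128, 128, 1024]

segment_blocks = sum(extents)          # all extents together form the segment
segment_bytes = segment_blocks * BLOCK_SIZE
print(segment_blocks, segment_bytes)   # prints 1288 10551296
```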

5. Give two examples of how you might determine the structure of the table DEPT.

Use the describe command or use the dbms_metadata.get_ddl package.

6. Where would you look for errors from the database engine?

In the alert log.

7. Compare and contrast TRUNCATE and DELETE for a table.

Both the truncate and delete commands have the desired outcome of getting rid of all the rows in a table. The difference between the two is that the truncate command is a DDL operation that just moves the high water mark and produces no rollback. The delete command, on the other hand, is a DML operation, which will produce rollback and thus take longer to complete.

8. Give the reasoning behind using an index.

Faster access to data blocks in a table.

9. Give the two types of tables involved in producing a star schema and the type of data they hold.

Fact tables and dimension tables. A fact table contains measurements while dimension tables will contain data that will help describe the fact tables.

10. What type of index should you use on a fact table?

A Bitmap index.

11. Give two examples of referential integrity constraints.

A primary key and a foreign key.

12. A table is classified as a parent table and you want to drop and re-create it. How would you do this without affecting the children tables?

Disable the foreign key constraint to the parent, drop the table, re-create the table, enable the foreign key constraint.

13. Explain the difference between ARCHIVELOG mode and NOARCHIVELOG mode and the benefits and disadvantages to each.

ARCHIVELOG mode is a mode that you can put the database in for creating a backup of all transactions that have occurred in the database so that you can recover to any point in time. NOARCHIVELOG mode is basically the absence of ARCHIVELOG mode and has the disadvantage of not being able to recover to any point in time. NOARCHIVELOG mode does have the advantage of not having to write transactions to an archive log and thus increases the performance of the database slightly.

14. What command would you use to create a backup control file?

Alter database backup controlfile to trace.

15. Give the stages of instance startup to a usable state where normal users may access it.

STARTUP NOMOUNT - Instance startup

STARTUP MOUNT - The database is mounted

STARTUP OPEN - The database is opened

16. What column differentiates the V$ views to the GV$ views and how?

The INST_ID column which indicates the instance in a RAC environment the information came from.

17. How would you go about generating an EXPLAIN plan?

Create a plan table with utlxplan.sql.

Use the explain plan set statement_id = 'tst1' into plan_table for a SQL statement

Look at the explain plan with utlxplp.sql or utlxpls.sql

18. How would you go about increasing the buffer cache hit ratio?

Use the buffer cache advisory over a given workload and then query the v$db_cache_advice table. If a change was necessary then I would use the alter system set db_cache_size command.
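For reference, the classic hit-ratio arithmetic behind that tuning step uses the physical reads, db block gets and consistent gets statistics. A quick Python sketch of the calculation:

```python
def buffer_cache_hit_ratio(physical_reads, db_block_gets, consistent_gets):
    """Fraction of block requests satisfied from the buffer cache."""
    logical_reads = db_block_gets + consistent_gets
    return 1.0 - physical_reads / logical_reads

# e.g. 1,000 physical reads against 20,000 logical reads
print(buffer_cache_hit_ratio(1000, 5000, 15000))  # prints 0.95
```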

19. Explain an ORA-01555

You get this error when you get a snapshot too old within rollback. It can usually be solved by increasing the undo retention or increasing the size of rollbacks. You should also look at the logic involved in the application getting the error message.

20. Explain the difference between $ORACLE_HOME and $ORACLE_BASE.

ORACLE_BASE is the root directory for Oracle. ORACLE_HOME, located beneath ORACLE_BASE, is where the Oracle products reside.

Well, we have gone through the first 20 questions as I would answer them during an interview. Please feel free to add your personal experiences to the answers as it will always improve the process and add your particular touch. As always remember these are "core" DBA questions and not necessarily related to the Oracle options that you may encounter in some interviews. Take a close look at the requirements for any job and try to come up with questions that the interviewer may ask. Next time we will tackle the rest of the questions. Until then, good luck with the process.


Solaris Logadm

Solaris 9 has added a new command called logadm which is useful in managing Oracle log files as well as any other log files that are used on your system. I don't know whether other flavours of UNIX have similar commands or not.

The command is '/usr/sbin/logadm'. The default configuration file is '/etc/logadm.conf', but a user configuration file can be specified.

The way I have used this is to create a cron job as below. This uses an Oracle specific config file.
====================
0 01 * * * "/usr/sbin/logadm -f /opt/apps/oracle/utils/logadm/logadm.conf"
====================

Then I created the config file as below. The configuration file must be left writable, as logadm updates it with -P entries to keep track of the last-processed timestamps.
====================
#
/oracle/admin/GANDALF/bdump/alert_GANDALF1.log -C 3 -c -p 1w -z 1
/oracle/product/10.1.0/db/network/log/listener.log -C 3 -c -p 1w -z 1
/oracle/product/10.1.0/db/network/log/sqlnet.log -C 3 -c -p 1w -z 1
====================
Using the switch '-c' means that 'logadm' is able to process the listener.log file without problems due to locking.

For further details consult the 'logadm' man page.

Thursday, February 09, 2006

Rollback Segments

Each time Oracle makes a change to schema data, it records the information required to undo that change in a special type of database area called a rollback segment. This information is always kept at least until the transaction making the change has committed, but as soon as the transaction is complete its rollback or undo data can be overwritten. How soon this happens depends on how much undo space is available and how quickly current and future transactions create new undo records. Within a few seconds, or minutes, or hours the undo information will be overwritten or, in some cases, simply discarded. Since the introduction of Oracle Version 6 in 1988, the allocation of rollback segment space has been a major concern for Oracle DBAs, who have had to decide both how many rollback segments an instance should have and how large each one should be. Resolving this issue has typically required a number of compromises that are outside the scope of this post.

Oracle9i supports the traditional rollback segment management features that have evolved over the past 13 years, but also introduces Automatic Undo Management. In this mode the DBA only has to create an "undo tablespace", tell Oracle to use this tablespace, and specify for how many seconds each undo record must be retained. The records will, of course, be kept for longer if the transaction that creates them does not commit within the time interval. In Oracle9i the following three instance parameters will guarantee that all undo entries will remain available for 15 minutes:

undo_management = AUTO
undo_retention = 900 # seconds
undo_tablespace = UNDOTBS

However, a potentially unwanted side effect is that the Oracle server will not retain the data for much longer than the time specified, even if the instance is running with a relatively light updating load, i.e. even if there is no great demand to write new undo information. This contrasts markedly with traditional rollback segment management, where under light updating loads undo entries could (and would) remain available for several hours to generate the read-consistent data sometimes required by long-running reports. Fortunately the instance parameter undo_retention can be altered dynamically using alter system set, and this may become necessary at sites where long report runs take place and update activity cannot be completely prevented while those reports are running.
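Sizing the undo tablespace to honor a given undo_retention follows the usual rule of thumb: undo space = retention (seconds) x undo blocks generated per second x block size. A quick Python check with assumed numbers (the rate of 100 undo blocks/s is invented for illustration):

```python
def undo_space_bytes(undo_retention_s, undo_blocks_per_s, block_size=8192):
    """Rule-of-thumb undo tablespace size for a given retention target."""
    return undo_retention_s * undo_blocks_per_s * block_size

# 900 s retention (as in the parameters above), 100 undo blocks/s, 8 KB blocks
mb = undo_space_bytes(900, 100) / (1024 * 1024)
print(round(mb))  # prints 703
```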

Wednesday, February 08, 2006

ADF JClient: Creating monolithic jar files to run JClient applications

The following information saved my life :) I do not know where I got it (so apologies for not referencing it), but it is useful if you want to package your code as a standalone jar file that depends on lots of different external libraries.

Within JDeveloper 10g it doesn't seem obvious how to deploy a JClient application within a monolithic Java archive file that can then be used to run the JClient application on the client using java -jar. The assumption made by the default deployment setting is that all dependency libraries are part of the classpath on the deployment platform, which means that only the application-specific classes need to be deployed in the application's archive file. This assumption makes sense if you have more than one ADF JClient application that runs on the local client. To deploy JClient applications so they run standalone out of a jar file, with no additional setup required on the client machine, you need to add all the dependency classes to the application deployment jar, which can quickly become 17 MB in size as a result. Do as follows:

1) In the ADF JClient project, create a new JAR File deployment profile.

2) In the profile settings, for the JAR options, specify the name of the runnable class (to make a runnable jar file)

3) Create a new "Dependency Analysis" File group.

4) Select all the libraries from the Libraries tab on the Contributors page.

5) Make sure "Include Contents in Output" is checked. Otherwise the jar files of these libraries will be added, which is of no help at runtime.

6) Deploy the application to the jar file

7) Run the jar file, e.g: java -jar archive1.jar
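Behind step 2, making the jar runnable just means the archive's META-INF/MANIFEST.MF carries a Main-Class entry naming the runnable class. It ends up looking something like this (the class name here is a made-up example):

```
Manifest-Version: 1.0
Main-Class: myapp.JClientApp
```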

Monday, February 06, 2006

Helpful hints for Designer

Process Modeler

  1. To display the Flow Name automatically, right click in the swim lane, select Customize Graphics, and click the Display Flow Name (On Create) checkbox.
  2. To increase/decrease the width of the swim lane, select the organizational unit, press shift + down arrow or shift + up arrow respectively.
  3. To change a process to a decision point and denote by using a diamond, right click in the swim lane, select customize graphics, select Decision Point from the drop down list, select Enhanced Symbol from the Mode group box. You will need to change each Process Modeler Diagram that will have a decision point.
  4. After a process, etc. is created, it is automatically added to the repository even if you have not saved the diagram. Therefore, if you make a mistake and the process, etc. should not be included, you must Delete it from the repository. If you Cut a process, etc. from a diagram, this merely removes it from the diagram, it does not delete it from the repository. Be careful how you use Cut and Delete.

Entity Relationship Diagram

  1. If your diagram is large and you would like to view only the entities and their relationships, you can opt not to display the attributes by selecting, Options> Customize from the menu bar. Click the Attributes checkbox in the View group box to remove the checkmark.
  2. When creating a relationship, to straighten the line between them, click the line and then use the up and down arrows or left and right arrows depending upon if the line is vertical or horizontal. Another option is to select Options>Customize from the menu bar and click the Snap checkbox in the Grid group box. You can also display the grid by clicking the Display checkbox in the Grid group box.
  3. To improve the readability and understanding of your relationships, you may need to create dog-legs or angled lines. After the initial relationship is created between two entities, you can click on intermediate points to create angled lines. To get the desired angle, you hold down the shift key and click the middle of the line to create a drawing point. You can then drag this drawing point to the desired spot. To remove the drawing point, press the shift key and click on the point again.

Repository Object Navigator (RON)

  1. To insert documents such as Word or email, from the RON navigator expand Reference Data Definition heading, highlight Documents and click the + (Create Object) button on the left side. The Document Properties window will open. Enter a document name, author, type, comment, and other information as necessary. Under the Documentation heading, single click the yellow icon located to the left of Document Text, a text pad window opens. Type information or copy and paste from another document and then save it.

Repository Reports

To generate a report with an html format that can be viewed by a client or non-Designer type person via a browser, select the required report in the navigator and then the Parameters Palette window will open. Set the Destination Type to be File, enter the LAN path into the Destination Name with .htm as the extension, change the Destination Format to html, and the Mode should be bitmap. Run the report.

Friday, February 03, 2006

Setup UNIX Sendmail to Access SMTP Gateway

The steps below are relevant to Sun Solaris servers running Solaris 2.6 or 2.8; consult your system administration manual for details on how to perform this for other hardware vendors / operating systems.

1. If the sendmail daemon is currently running on your system, terminate it with the following command:

/etc/init.d/sendmail stop

2. Copy /etc/mail/main.cf to /etc/mail/sendmail.cf

3. Edit /etc/hosts and place an entry here for the SMTP gateway machine, e.g:

1.2.3.4 mailhost

4. To test connectivity to the SMTP machine called mailhost, enter the following command:

telnet mailhost 25

This will initiate a telnet session with the mailhost machine on port 25, which is the port on which the SMTP daemon listens for incoming messages.

5. Edit the /etc/mail/sendmail.cf file and edit the following entries:

Change Dmsmartuucp to Dmether

This changes the mailer program for remote mail delivery from uucp to the smtp mailer.

Change DR ddn-gateway to DR mailhost

Change CR ddn-gateway to CR mailhost

This changes the behavior of the sendmail daemon so that all remote mail generated from this server is routed to the SMTP host you defined in /etc/hosts.

6. Save the sendmail.cf configuration file

7. Start the sendmail daemon by issuing the following command:

/etc/init.d/sendmail start

8. Run $POM_TOP/bin/MasterScript.sh stop

9. Add /usr/lib to the PATH in the .profile file and activate it

10. Run $POM_TOP/bin/MasterScript.sh start apps/apps
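Once the gateway answers on port 25, any SMTP client can relay through it. A Python sketch that builds a message (all addresses here are made up); the actual send to 'mailhost' is left commented out so nothing is mailed by accident:

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "oracle@dbserver.example.com"   # hypothetical addresses
msg["To"] = "dba@example.com"
msg["Subject"] = "Test via SMTP gateway"
msg.set_content("Hello from the database server.")

# relay through the gateway defined in /etc/hosts (port 25, as tested above):
# import smtplib
# with smtplib.SMTP("mailhost", 25) as s:
#     s.send_message(msg)

print(msg["Subject"])  # prints Test via SMTP gateway
```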

Wednesday, February 01, 2006

Tip: dbms_random

SELECT dbms_random.string('U', 2)||TRUNC(dbms_random.VALUE(1000, 9999))
FROM dual;

Output : AU3910

SELECT dbms_random.string('U', 2)||TRUNC(dbms_random.VALUE(1, 9))||
dbms_random.string('U', 2)||TRUNC(dbms_random.VALUE(1,9))||
dbms_random.string('U', 2)
FROM dual;

Output : AI7KU4EE
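For readers without a database handy, the second query's pattern - pairs of uppercase letters separated by single digits, since TRUNC(dbms_random.VALUE(1, 9)) yields 1 to 8 - can be mimicked in Python:

```python
import random
import string

def random_code():
    """Produce a code shaped UUdUUdUU, like the second query above."""
    letters = lambda: "".join(random.choice(string.ascii_uppercase)
                              for _ in range(2))
    digit = lambda: str(random.randint(1, 8))   # TRUNC(VALUE(1, 9)) -> 1..8
    return letters() + digit() + letters() + digit() + letters()

print(random_code())  # e.g. 'AI7KU4EE'
```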