- HZ_PARTIES - Stores basic information about parties and their relationships. Party ID is the primary key; Party ID, Party Number, Party Name, and Party Type are the key columns. Establishes the business relationships.
- HZ_LOCATIONS - Stores delivery or postal addresses. Location ID is the primary key. Not to be used for modeling party hierarchical relationships (use party relationships instead). A location can be shared across parties, and a party can have multiple locations; an account (beneath the party) inherits the locations defined for the party.
- HZ_PARTY_SITES - Links HZ_PARTIES with HZ_LOCATIONS. Party Site ID is the primary key.
- HZ_CUST_ACCOUNTS - Stores the different instances of a business relationship (selling, etc.). Multiple accounts can exist for a party. Cust Account ID is the primary key.
- HZ_CUST_ACCT_SITES_ALL - Site references for the account. Partitioned by Org ID. Cust Acct Site ID is the primary key.
- HZ_CUST_SITE_USES_ALL - Stores the business-purpose usage (Bill To, Ship To, etc.) for each customer account site. Site Use ID is the primary key. Linked to HZ_CUST_ACCT_SITES_ALL.
- HZ_CUSTOMER_PROFILES - Stores the credit characteristics of a party, account, or site.
- HZ_PARTY_RELATIONSHIPS - Stores relationships between parties.
- HZ_CUST_PROFILE_CLASSES - Stores credit characteristics that are common across a group of accounts.
Apps Tech Notes
Wednesday, 19 November 2014
TCA Architecture
Monday, 17 November 2014
HowTo: Run the .sh File Shell Script In Linux / UNIX
A .sh file is nothing but a shell script, used to install a given application or to perform other tasks under UNIX-like operating systems. The easiest way to run a .sh shell script in Linux or UNIX is to type the following commands. Open the terminal (your shell prompt) and type:
sh file.sh
OR
bash file.sh
.sh File As Root User
Sometimes you need root access to install an application; without root, you won't have the necessary permissions to install the application or make system-level modifications. Root access is disabled by default on many Linux and UNIX-like systems. Simply use sudo or su as follows:
sudo bash filename.sh
Type your password. Another option is to use the su command as follows to become superuser:
su -
Type root user password and finally run your script:
bash filename.sh
chmod Command: Run Shell Script In Linux
Another recommended option is to set an executable permission using the chmod command as follows:
chmod +x file.sh
Now you can run your .sh file as follows:
./file.sh
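Putting the steps above together, a complete run might look like this; the script name file.sh and its one-line body are illustrative:

```shell
# Create a small example script
cat > file.sh <<'EOF'
#!/bin/sh
echo "Hello from file.sh"
EOF

# Set the executable permission, then run the script directly
chmod +x file.sh
./file.sh    # prints: Hello from file.sh
```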
Unix Commands
Basic Shell Script Commands
# Create Directory
mkdir <dirname>
# Remove Directory
rmdir <dirname>
#remove folder with files
rm -r -f <dirname>
# Change Directory
cd <newpath>
# Create new file
vi <newfile.ext>
#insert data into file
vi <openfilename.ext>
esc i <make changes>
#Save file
esc :wq enter
# exit without saving changes
esc :q! enter
# open existing file
vi <existingfilename.ext>
#remove file
rm <filename.ext>
# copy file with same name
cp <sourcedir>/<sourcefilename.ext> <destinationdir>
# copy file with new name
cp <sourcedir>/<sourcefilename.ext> <destinationdir>/<newfilename.ext>
# Move file with same name
mv <sourcedir>/<sourcefilename.ext> <destinationdir>
# move file with data appended to filename in the front
mv <sourcedir>/<sourcefilename.ext> <destinationdir>/`date +%H%M%d%m%y`<filename.ext>
#print line
echo "your text here to print"
#print date
echo `date`
# show the last N lines of a file
tail -n <N> <filename.ext>
# all rights (grant read/write/execute to everyone)
chmod 777 <filename.ext>
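As a worked example combining several of the commands above (directory and file names are illustrative):

```shell
# Create a source and a destination directory
mkdir -p srcdir destdir

# Create a file with some sample data
echo "sample data" > srcdir/report.txt

# Copy the file keeping the same name
cp srcdir/report.txt destdir

# Move a second copy with a timestamp (HHMMddmmyy) prepended to the name
cp srcdir/report.txt srcdir/report2.txt
mv srcdir/report2.txt destdir/`date +%H%M%d%m%y`report2.txt

# List the destination directory to verify
ls destdir
```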
Friday, 14 November 2014
Using Parallel Execution
https://docs.oracle.com/cd/B10501_01/server.920/a96520/tuningpe.htm
Types of Parallelism
The examples in this section illustrate parallel DML (INSERT, UPDATE, DELETE, and MERGE).
Example 21-4 Parallelizing INSERT ... SELECT
Add the new employees who were hired after the acquisition of ACME.
INSERT /*+ PARALLEL(employees) */ INTO employees
SELECT /*+ PARALLEL(ACME_EMP) */ * FROM ACME_EMP;
Example 1 Parallelizing UPDATE and DELETE
Give a 10 percent salary raise to all clerks in Dallas.
UPDATE /*+ PARALLEL(employees) */ employees
SET sal = sal * 1.1
WHERE job = 'CLERK'
AND deptno IN (SELECT deptno FROM dept WHERE location = 'DALLAS');
The PARALLEL hint is applied to the UPDATE operation as well as to the scan.
Example 2 Parallelizing UPDATE and DELETE
Remove all products in the grocery category because the grocery business line was recently spun off into a separate company.
DELETE /*+ PARALLEL(products) */ FROM products
WHERE product_category = 'GROCERY';
Again, the parallelism is applied to the scan as well as to the DELETE operation on the PRODUCTS table.
Incremental Data Loading in Parallel
Parallel DML combined with the updatable join views facility provides an efficient solution for refreshing the tables of a data warehouse system. To refresh tables is to update them with the differential data generated from the OLTP production system.
In the following example, assume that you want to refresh a table named customers that has columns c_key, c_name, and c_addr. The differential data contains either new rows or rows that have been updated since the last refresh of the data warehouse. In this example, the updated data is shipped from the production system to the data warehouse system by means of ASCII files. These files must be loaded into a temporary table, named diff_customer, before starting the refresh process. You can use SQL*Loader with both the parallel and direct options to efficiently perform this task. You can use the APPEND hint when loading in parallel as well.
Once diff_customer is loaded, the refresh process can be started. It can be performed in two phases or by merging in parallel, as demonstrated in the following:
- Updating the Table in Parallel
- Inserting the New Rows into the Table in Parallel
- Merging in Parallel
Updating the Table in Parallel
The following statement is a straightforward SQL implementation of the update using subqueries:
UPDATE customers
SET (c_name, c_addr) = (SELECT c_name, c_addr
                        FROM diff_customer
                        WHERE diff_customer.c_key = customers.c_key)
WHERE c_key IN (SELECT c_key FROM diff_customer);
Unfortunately, the two subqueries in this statement affect performance.
An alternative is to rewrite this query using updatable join views. To do this, you must first add a primary key constraint to the diff_customer table to ensure that the modified columns map to a key-preserved table:
CREATE UNIQUE INDEX diff_pkey_ind ON diff_customer(c_key) PARALLEL NOLOGGING;
ALTER TABLE diff_customer ADD PRIMARY KEY (c_key);
You can then update the customers table with the following SQL statement:
UPDATE /*+ PARALLEL(cust_joinview) */
  (SELECT /*+ PARALLEL(customers) PARALLEL(diff_customer) */
          customers.c_name AS c_name, customers.c_addr AS c_addr,
          diff_customer.c_name AS c_newname, diff_customer.c_addr AS c_newaddr
   FROM customers, diff_customer
   WHERE customers.c_key = diff_customer.c_key) cust_joinview
SET c_name = c_newname, c_addr = c_newaddr;
The base scans feeding the join view cust_joinview are done in parallel. You can then parallelize the update to further improve performance, but only if the customers table is partitioned.
Inserting the New Rows into the Table in Parallel
The last phase of the refresh process consists of inserting the new rows from the diff_customer temporary table into the customers table. Unlike the update case, you cannot avoid having a subquery in the INSERT statement:
INSERT /*+ PARALLEL(customers) */ INTO customers
SELECT * FROM diff_customer s
WHERE s.c_key NOT IN (SELECT c_key FROM customers);
However, you can guarantee that the subquery is transformed into an anti-hash join by using the HASH_AJ hint. Doing so enables you to use parallel INSERT to execute the preceding statement efficiently. Parallel INSERT is applicable even if the table is not partitioned.
Merging in Parallel
In Oracle9i, you combine the previous updates and inserts into one statement, commonly known as a merge. The following statement achieves the same result as all of the statements in "Updating the Table in Parallel" and "Inserting the New Rows into the Table in Parallel":
MERGE INTO customers
USING diff_customer
ON (diff_customer.c_key = customers.c_key)
WHEN MATCHED THEN
  UPDATE SET (c_name, c_addr) = (SELECT c_name, c_addr
                                 FROM diff_customer
                                 WHERE diff_customer.c_key = customers.c_key)
WHEN NOT MATCHED THEN
  INSERT VALUES (diff_customer.c_key, diff_customer.c_name, diff_customer.c_addr);
FIRST_ROWS(n) Hint
Starting with Oracle9i, a hint called FIRST_ROWS(n), where n is a positive integer, was added. This hint enables the optimizer to use a new optimization mode to optimize the query to return n rows in the shortest amount of time. Oracle Corporation recommends that you use this new hint in place of the old FIRST_ROWS hint for online queries, because the new optimization mode may improve the response time compared to the old optimization mode.
Use the FIRST_ROWS(n) hint in cases where you want the first n rows in the shortest possible time. For example, to obtain the first 10 rows in the shortest possible time, use the hint as follows:
SELECT /*+ FIRST_ROWS(10) */ article_id
FROM articles_tab
WHERE CONTAINS(article, 'Oracle') > 0
ORDER BY pub_date DESC;
Tuesday, 11 November 2014
SQL Script for Running from the Back End
Set serveroutput on;
Set timing on;
Set echo on;
Set feedback on;
spool xxx.lst
.
.
.
COMMIT;
spool off;
exit;
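A sketch of how such a script could be generated and launched from the shell; the file names and the sqlplus connect string are assumptions for illustration:

```shell
# Write the SQL*Plus wrapper to a file
cat > xxx.sql <<'EOF'
Set serveroutput on;
Set timing on;
Set echo on;
Set feedback on;
spool xxx.lst
-- your SQL statements go here
COMMIT;
spool off;
exit;
EOF

# The script would then typically be run as (connect string is hypothetical):
#   sqlplus apps/apps_password @xxx.sql
cat xxx.sql
```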
Unix Nohup: Run a Command or Shell-Script Even after You Logout
When you execute a Unix job in the background (using &, or the bg command) and log out from the session, your process will get killed. You can avoid this using several methods: executing the job with nohup, or making it a batch job using the at, batch, or cron commands.
This quick tip is for beginners. If you’ve been using nohup for a while, leave us a comment and tell us under what situations you use nohup.
In this quick tip, let us review how to make your process running even after you logout, using nohup.
Nohup stands for no hang up, which can be executed as shown below.
nohup syntax:
# nohup command-with-options &
Nohup is very helpful when you have to execute a shell script or command that takes a long time to finish. In that case, you don't want to be connected to the shell and waiting for the command to complete. Instead, execute it with nohup, exit the shell, and continue with your other work.
Explanation about nohup.out file
By default, standard output is redirected to the nohup.out file in the current directory, and standard error is redirected to stdout, so it also ends up in nohup.out. Your nohup.out will therefore contain both the standard output and the error messages from the script that you executed with the nohup command.
Instead of using nohup.out, you can also redirect the output to a file using the normal shell redirections.
Example: Printing lines to both standard output & standard error
while true
do
    echo "standard output"
    echo "standard error" 1>&2
    sleep 1
done
Execute the script without redirection
$ nohup sh custom-script.sh &
[1] 12034
$ nohup: ignoring input and appending output to `nohup.out'
$ tail -f nohup.out
standard output
standard error
standard output
standard error
..
Execute the script with redirection
$ nohup sh custom-script.sh > custom-out.log &
[1] 11069
$ nohup: ignoring input and redirecting stderr to stdout
$ tail -f custom-out.log
standard output
standard error
standard output
standard error
..
If you log-out of the shell and login again, you’ll still see the custom-script.sh running in the background.
$ ps aux | grep sathiya
sathiya 12034 0.0 0.1 4912 1080 pts/2 S 14:10 0:00 sh custom-script.sh
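To see the redirection behavior end to end, here is a minimal self-contained run; the script exits on its own instead of looping so that it finishes quickly, and the file names are illustrative:

```shell
# A short-lived script that writes to both stdout and stderr
cat > custom-script.sh <<'EOF'
#!/bin/sh
echo "standard output"
echo "standard error" 1>&2
EOF

# Run it under nohup, sending both streams to our own log file
# instead of the default nohup.out
nohup sh custom-script.sh > custom-out.log 2>&1 &
wait

# Both messages end up in the same log file
cat custom-out.log
```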
Parallel Hint
SELECT /*+ PARALLEL(X,10) */
COUNT(*)
FROM OE_ORDER_HEADERS_ALL X
WHERE org_id = 100
AND creation_date >= '01-DEC-2001';
Parallel SQL enables a SQL statement to be processed by multiple threads or processes simultaneously.
Understanding Parallel SQL
In a serial—nonparallel—execution environment, a single process or thread undertakes the operations required to process your SQL statement, and each action must complete before the succeeding action can commence. The single Oracle process might only leverage the power of a single CPU and read from a single disk at any given instant. Because most modern hardware platforms include more than a single CPU and because Oracle data is often spread across multiple disks, serial SQL execution cannot take advantage of all the available processing power.
For instance, consider the following SQL statement:
SELECT *
FROM sh.customers
ORDER BY cust_first_name, cust_last_name, cust_year_of_birth;
If executed without the parallel query option, a single process would be responsible for fetching all the rows in the CUSTOMERS table. The same process would be responsible for sorting the rows to satisfy the ORDER BY clause. Figure 13-1 illustrates the workflow.
We can request that Oracle execute this statement in parallel by using the PARALLEL hint:
SELECT /*+ parallel(c,2) */ *
FROM sh.customers c
ORDER BY cust_first_name, cust_last_name, cust_year_of_birth;
If parallel processing is available, the CUSTOMERS table will be scanned by two processes in parallel. A further two processes will be employed to sort the resulting rows. A final process—the session that issued the SQL in the first place—combines the rows and returns the result set. The process that requests and coordinates the parallel processing stream is the Query coordinator. Figure 13-2 illustrates this sequence of events.
Figure 13-1 Serial execution of a SQL statement.
Oracle supports parallel processing for a wide range of operations, including queries, DDL, and DML:
- Queries that involve table or index range scans
- Bulk insert, update, or delete operations
- Table and index creation
- The collection of object statistics using DBMS_STATS (see Chapter 7, "Optimizing the Optimizer")
- Backup and recovery operations using Recovery Manager (RMAN)
The Degree of Parallelism (DOP) defines the number of parallel streams of execution that will be created. In the simplest case, this translates to the number of parallel slave processes enlisted to support your SQL's execution. However, the number of parallel processes is more often twice the DOP. This is because each step in a nontrivial execution plan needs to feed data into the subsequent step, so two sets of processes are required to maintain the parallel stream of processing.
Figure 13-2 Parallel Execution.
For instance, if the statement includes a full table scan, an ORDER BY, and a GROUP BY, three sets of parallel processes would seem to be required: one to scan, one to sort, and one to group. However, because Oracle reuses the first set of parallel processes (those that performed the scan) to perform the third operation (the GROUP BY), only two sets of processes are required. As a result of this approach, the number of parallel slaves allocated should never be more than twice the DOP.
Figure 13-3 shows how parallel slaves are allocated for a DOP of 2.
Parallel slave pool
The Oracle server maintains a pool of parallel slave processes available for parallel operations. The database configuration parameters PARALLEL_MIN_SERVERS and PARALLEL_MAX_SERVERS determine the initial and maximum size of the pool. If insufficient slaves are currently active but the pool has not reached its maximum value, Oracle will create more slaves. After a configurable period of inactivity, slave processes will shut down until the pool is again at its minimum size.
Figure 13-3 Parallel process allocation for a DOP of 2.
If there are insufficient query processes to satisfy the DOP requested by your statement, one of the following outcomes results:
- If some parallel query slaves are available, but fewer than requested by your SQL statement, your statement might run at a reduced DOP.
- If there are no parallel query slaves available, your statement might run serially.
- Under specific circumstances, you might get an error. This will only occur if the database parameter PARALLEL_MIN_PERCENT has been set to a value that is higher than the percentage of required slaves that are available.
- In Oracle 11g Release 2 and forward, your SQL execution might be delayed until sufficient parallel servers are available.
See the "Parallel Configuration Parameters" section later in this chapter for more information on how to configure these outcomes.
Parallel Query IO
We discussed in Chapter 2, "Oracle Architecture and Concepts," and elsewhere, how the Oracle buffer cache helps reduce disk IO by buffering frequently accessed data blocks in shared memory. Oracle has an alternate IO mechanism, direct path IO, which it can use if it determines that it would be faster to bypass the buffer cache and perform the IO directly. For instance, Oracle uses direct IO when reading and writing temporary segments for sorting and intermediate result sets. In Oracle 11g onward, Oracle sometimes uses direct path IO in preference to the normal buffered IO for serial table access as well.
When performing Parallel query operations, Oracle normally uses direct path IO. By using direct path IO, Oracle avoids creating contention for the buffer cache and allows IO to be more optimally distributed between slaves. Furthermore, for parallel operations that perform full table scans the chance of finding matching data in the buffer cache is fairly low, so the buffer cache adds little value.
In Oracle 10g and earlier, parallel query always uses direct path IO, and serial query will always use buffered IO. In 11g, Oracle can use buffered IO for parallel query (from 11g release 2 forward), and serial queries might use direct path IO. However, it remains true that parallel queries are less likely to use buffered IO and might, therefore, have a higher IO cost than serial queries. The higher IO cost will, of course, be shared amongst all the parallel processes so the overall performance might still be superior.
Direct path and buffered IO are discussed in more detail within Chapter 21, "Disk IO Tuning Fundamentals."
Parallel performance gains
The performance improvements that you can expect to obtain from parallel SQL depend on the suitability of your host computer, Oracle configuration, and the SQL statement. If all the conditions for parallel processing are met, you can expect to get substantial performance improvements in proportion to the DOP employed.
On many systems, the limit of effective parallelism will be determined by segment spread, not by hardware configuration. For instance, if you have 32 CPUs and 64 independent disk devices, you might hope for effective parallelism up to at least a DOP of 32 or maybe even 64. However, if the table you are querying is spread over only 6 disks, you are likely to see performance improvements reduce as you increase the DOP beyond 6 or so.
Figure 13-4 illustrates the improvements gained when increasing the DOP for a SQL statement that performs a table scan and GROUP BY of a single table.
Figure 13-4 Improvement gains for various DOPs on various host configurations.
The host configurations shown are
- An Amazon CPU-intensive Extra Large EC2 image. This is a virtual server running in Amazon's AWS cloud that has the equivalent of 8 x 2.5-GHz CPUs and has storage on a widely striped SAN.
- An Amazon CPU-intensive Medium EC2 image. This is similar to the extra large image, but has only 2 CPUs.
- A Dell 6800 4 CPU server with disk storage on a widely striped SAN using ASM.
- A Dell latitude D830 laptop (my laptop). It is dual core, but all data files are on a single disk.
In each case, the parallel SQL was the only SQL running.
These examples show that for suitably configured systems, performance gains were greater the more CPUs that were available. However, attempting to use parallel on a host that is unsuitable (as in my laptop) is futile at best and counter-productive at worst.
The performance gains achieved through parallel processing are most dependent on the hardware configuration of the host. To benefit from parallel processing, the host should possess multiple CPUs, and data should be spread across multiple disk devices.