
Monday, 13 February 2012

OBIEE 11g NQCMD

NQCMD on Linux




If your bash profile does not set the environment variables that nqcmd needs, you will have to run this script first:

. $MW_HOME/instances/instance1/bifoundation/OracleBIApplication/coreapplication/setup/bi-init.sh


To start nqcmd, run:
$MW_HOME/Oracle_BI1/bifoundation/server/bin/nqcmd


To run commands manually, start nqcmd in interactive mode and choose option Q at the prompt to enter SQL queries:
nqcmd -d AnalyticsWeb -u weblogic -p mypassword

SQL Commands
To clear the cache: Call SAPurgeAllCache();
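The cache purge is easy to automate in script mode. A minimal sketch, using the DSN and user from this post and an example file path, writes the purge call to a file and passes it to nqcmd with -s:

```shell
#!/bin/sh
# Write the purge call to a SQL input file (path is an example)
SQL_FILE=/tmp/purge_cache.sql
cat > "$SQL_FILE" <<'EOF'
Call SAPurgeAllCache();
EOF

# Run it non-interactively; DSN and user are the examples from this post
NQCMD="$MW_HOME/Oracle_BI1/bifoundation/server/bin/nqcmd"
if [ -x "$NQCMD" ]; then
    "$NQCMD" -d AnalyticsWeb -u weblogic -s "$SQL_FILE" -o /tmp/purge_cache.out
else
    echo "nqcmd not found; source bi-init.sh to set MW_HOME first" >&2
fi
```

Scheduling this script (for example from cron after an ETL load) keeps stale results out of the cache.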

Other information:

1. nqcmd -dmy_dsn -umy_username [-pmy_password] -ssql_input_file -omy_result_file


2.    You can pass a text file with SQL statements to the utility (script mode), or you can enter SQL at the command line (interactive mode). Queries are run against the default subject area, unless the object names used in the query are fully qualified.
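As a sketch of script mode, the file below holds two queries; the subject area and column names are hypothetical placeholders, not from this post. The -D argument names the statement delimiter used in the file:

```shell
#!/bin/sh
# Build a SQL input file for script mode; the subject area and
# columns here are hypothetical placeholders - substitute your own.
cat > /tmp/test_queries.sql <<'EOF'
SELECT "Time"."Year", "Measures"."Revenue" FROM "Sample Sales";
SELECT "Products"."Brand" FROM "Sample Sales";
EOF

# Pass the file to nqcmd; -D matches the delimiter used above
NQCMD="$MW_HOME/Oracle_BI1/bifoundation/server/bin/nqcmd"
if [ -x "$NQCMD" ]; then
    "$NQCMD" -d AnalyticsWeb -u weblogic \
        -s /tmp/test_queries.sql -o /tmp/test_results.txt -D ';'
fi
```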


Command-Line Arguments for nqcmd

-?
    Lists the available command-line arguments.

-ddata_source_name
    The ODBC data source name for the Oracle BI Server to which you want to connect. If you omit this parameter, you are prompted at the command line to enter the DSN.
    Tip: On Windows, you can see the available local ODBC data source names by going to Control Panel > Administrative Tools > Data Sources (ODBC). Click the System DSN tab to see a list of the available DSNs (for example, AnalyticsWeb_coreapplication).

-uuser_name
    A valid Oracle Business Intelligence user name.

-ppassword
    The corresponding Oracle Business Intelligence user password. The password argument is optional. If you do not provide a password argument, you are prompted to enter a password when you run the command. To minimize the risk of security breaches, Oracle recommends that you do not provide a password argument either on the command line or in scripts. Note that the password argument is supported for backward compatibility only, and will be removed in a future release.

-ssql_input_file_name
    The name and path of a text file that includes your test SQL queries.

-ooutput_result_file_name
    The name and path of a file to which the utility will write the query results. This option is only used with -s.

-Ddelimiter
    The delimiter used in the SQL input file (for example, semicolon (;) or colon (:)). This option is only used with -s.

-a
    Enables asynchronous processing. This option is typically used with -s, when you are passing a SQL input file with multiple SQL statements.

-z
    Enables UTF8 output instead of ACP in the output result file. You might need to include this option to display international characters in query results.

-utf16
    Enables UTF16 instead of ACP for communication between nqcmd and the Oracle BI ODBC driver. You might need to include this option to display international characters in query results.

-NotForwardCursor
    Disables the ODBC forward-only cursor. Including this argument overrides the setting specified in the ODBC DSN.

-v
    Displays the version of the nqcmd utility.

-SessionVar session_variable_name=session_variable_value
    Includes the specified session variable and sets it to the specified value.
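These arguments are commonly combined into a small wrapper script. In the sketch below, the file paths and the LOGLEVEL session variable are placeholders; the DSN and user are the examples from this post:

```shell
#!/bin/sh
# Generate a reusable wrapper that runs a query file with UTF8 output
# and a session variable. LOGLEVEL=2 and the paths are placeholders.
cat > /tmp/run_nqcmd.sh <<'EOF'
#!/bin/sh
exec "$MW_HOME/Oracle_BI1/bifoundation/server/bin/nqcmd" \
    -d AnalyticsWeb -u weblogic \
    -s /tmp/queries.sql -o /tmp/results.txt \
    -z -SessionVar LOGLEVEL=2
EOF
chmod +x /tmp/run_nqcmd.sh
```

Keeping the invocation in a wrapper avoids retyping the long binary path and makes the argument set easy to version-control.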


How to Write the Create Aggregates Specification

All metadata names (except for logical fact columns) are fully qualified. There are two modes of operation: Create and Delete. It is strongly recommended that you place all aggregate specifications under a single Create Aggregates statement.

Follow these guidelines when writing the aggregate specification:
- Begin the script file with a Delete statement. It is essential to delete system-generated aggregates before creating new ones. This ensures that data is consistent and removes invalid or incomplete aggregates before you run the Create operation. The following is the syntax for deleting aggregates:

      Delete aggregates [list of fully qualified physical table names];

  For example:

      Delete aggregates "src".."INCR"."fact_1", "src".."INCR"."fact_2";

You can optionally include a comma-separated list of physical tables to delete. The tables you include must be system-generated (by a previous run of the aggregate creation script). Any dimension tables joined to listed fact tables are also deleted.
Note that if a dimension table is joined to more than one fact table, it will not be deleted unless the other joined table is also listed.
In addition to fact tables, you can also use the Delete statement to delete orphan dimension tables (that is, dimension tables that are not joined to any other fact table). Orphan dimension tables sometimes occur when aggregate creation fails.


- The next statement should be a Create statement. The following is the syntax for creating aggregates:

      Create|Prepare aggregates
      aggr_name_1
      for logical_fact_table_1 [(logical_fact_column_1, logical_fact_column_2, …)]
      at levels (level_1, level_2, …)
      using connection pool connection_pool_name_1
      in schema_name_1
      [, aggr_name_2
      for logical_fact_table_3 [(logical_fact_column_5, logical_fact_column_2, …)]
      at levels (level_3, level_2, …)
      using connection pool connection_pool_name_2
      in schema_name_2];
- To specify multiple aggregates in a single Create Aggregates statement, follow these guidelines:
  - Separate each aggregate specification with a comma, and terminate the entire aggregate creation script with a semicolon.
  - Specify only one Delete Aggregates statement, at the beginning of the file. Make sure that only one delete is issued per ETL run (unless a reset is called for).

Caution: Any aggregate script run after the first one should not include a Delete Aggregates statement, or all previously created aggregates will be removed.
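Putting the guidelines together, a minimal spec file might look like the sketch below. Every metadata name (logical fact table, measures, levels, connection pool, schema) is a hypothetical placeholder, and the file is run through nqcmd with -s:

```shell
#!/bin/sh
# A minimal aggregate-spec file following the syntax above:
# one Delete at the top, then a single Create Aggregates statement.
# All metadata names here are hypothetical placeholders.
cat > /tmp/create_aggr.sql <<'EOF'
Delete aggregates;

Create aggregates
"ag_revenue_by_year"
for "Sales"."Fact Revenue" ("Revenue")
at levels ("Sales"."Time"."Year", "Sales"."Products"."Brand")
using connection pool "DW"."Aggregate Pool"
in "DW".."AGGR";
EOF

# Execute the script against the BI Server (example DSN and user)
NQCMD="$MW_HOME/Oracle_BI1/bifoundation/server/bin/nqcmd"
if [ -x "$NQCMD" ]; then
    "$NQCMD" -d AnalyticsWeb -u weblogic -s /tmp/create_aggr.sql
fi
```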

 
