
Test cases for Nightly Build


#####App Store

a) Go to Job Management -> App Store. Navigate to any desired domain, choose an app, and click to install it. Once the message 'App installed successfully' appears, verify that the new job is registered properly in the BDRE metadata.

b) You may also check ${user.home}/bdre-wfd/ (e.g. /home/cloudera/bdre-wfd) to verify that the corresponding app content has been copied successfully.
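
A quick way to spot-check this from a terminal (a minimal sketch; the exact sub-directory layout under bdre-wfd depends on the app you installed):

```bash
# List the workflow content copied during the app install
ls -lR ${HOME}/bdre-wfd/
```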

#####Job export

a) Go to Metadata Management -> Job Definitions -> Processes, choose a process, and click the 'Export' button against it.

b) Choose the 'Download zip' option and verify that the download succeeds and that the downloaded zip contains the process JSON and any related content.

c) On the export screen, choose 'Export to App Store' instead of 'Download zip' and verify that, on clicking it, an entry is made on the Job Management -> App Deployment page for the current app. You may also log in as admin and check whether the merge and reject actions work properly.

#####Job import

a) Download the zip of any app by exporting it (you may obtain the zip from a different environment), navigate to Job Management -> Job Import Wizard, and upload the zip. Once the import succeeds, check the Processes page to verify that a new app/process is registered and that the corresponding content has been uploaded to ${user.home}/bdre-wfd/, just as with an App Store install.

#####Workflow Creator

a) Navigate to Metadata Management -> Job Definitions -> Workflow Creator and create a workflow graphically. Add sub-processes of different kinds, such as Hive, MapReduce, Shell, Spark, and R, and configure each one by providing its properties. Make sure the job is created successfully on the Processes page.

b) After creating the workflow, go to the Processes page and click the 'Display' button to check the process pipeline, followed by the Oozie XML and the diagram.
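
If you also want to sanity-check the generated Oozie XML outside the UI, one option is the Oozie client (a sketch, assuming you have saved the displayed XML locally as workflow.xml):

```bash
# Validate the workflow definition against the Oozie schema
# (newer Oozie versions also require -oozie http://<oozie-host>:11000/oozie)
oozie validate workflow.xml
```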

#####Process Deployment

a) Choose any process from the Processes page and click the deploy button. Then either navigate to Job Management -> Process Deployment and ensure a new record is added to the queue, or watch the deploy status of the process on the Processes page keep updating until the job is successfully deployed.

b) If the deployment fails, check the logs and make sure the deploy scripts work properly for the different job types.

#####Process Execution

a) Once deployment is successful for a given job, click the execute button against it to make sure the job launches successfully (check both Oozie and standalone jobs).
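
To cross-check the launch outside the UI, you can use the Oozie and YARN CLIs (a sketch; the Oozie server URL is an assumption, substitute your own, and the job id placeholder comes from the listing):

```bash
# Recently submitted Oozie workflows
oozie jobs -oozie http://localhost:11000/oozie -jobtype wf -len 10

# Details of a specific run
oozie job -oozie http://localhost:11000/oozie -info <oozie-job-id>

# For standalone (non-Oozie) jobs, check the running YARN applications
yarn application -list -appStates RUNNING
```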

#####RDBMS Ingestion (with data load enabled)

a) Create the job from Data Ingestion -> Import from RDBMS. Enter the database credentials, choose a table from the listed ones, select a few columns and their corresponding data types, and try the different increment modes. Check that the file exported from the RDBMS table is registered in the BDRE metadata File table.

b) Once the import is done, launch the data load job and check that the tables are created and populated in the raw and base (or corresponding) databases in Hive.
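
One way to verify from the Hive CLI (a sketch; the 'raw' and 'base' database names and the table placeholder should match your job configuration):

```bash
# Check that the table was created in both databases and received rows
hive -e "SHOW TABLES IN raw; SHOW TABLES IN base;"
hive -e "SELECT COUNT(*) FROM base.<your_table>;"
```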

#####Data Load

a) Create an upstream job which hands a file or batch to this data load job. The file can be JSON, XML, delimited, or mainframe format. Try loading data into Hive from each of these formats under Data Ingestion -> Load File in Hive.

b) Verify the corresponding table partitions in Hive. Repeat the job with a different input file/batch to observe the incremental load of data across partitions.
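
A quick partition check from the Hive CLI (a sketch; the database and table names are placeholders for the ones created by your data load job):

```bash
# New partitions should appear after each incremental load
hive -e "SHOW PARTITIONS <your_db>.<your_table>;"
```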

#####Web crawler

a) Provide a URL to crawl and choose the output HDFS directory while creating the job under Data Ingestion -> Web Crawl & Ingest. Run the job and check the output in HDFS under the configured directory.
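
To inspect the crawl output from the command line (a sketch; the path is a placeholder for whatever HDFS output directory you configured in the job):

```bash
# List the crawled files written by the job
hdfs dfs -ls /<configured-output-dir>
```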

#####Monitor Directory & Ingestion

a) Provide the local directory to be monitored and the output HDFS directory while creating the job under Data Ingestion -> Monitor Directory & Ingest. Run the job and verify that data added to the local directory keeps landing in the configured HDFS directory, and check in the BDRE metadata File table whether the files are registered.

#####Test data generation

a) Configure the job with multiple columns of different data types such as date, string, and number, and check that the column order is maintained in the final HDFS file.
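
You can eyeball the column order directly from HDFS (a sketch; the output path and part-file naming are assumptions based on a typical HDFS job output layout):

```bash
# Print the first few generated records and confirm the column order
hdfs dfs -cat /<configured-output-dir>/part-* | head -n 5
```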

#####Data Quality

a) Ensure Drools is set up and the corresponding rule is created.

b) Create the job under Data Ingestion -> New DQ Job by providing the necessary rule credentials, package, and other configuration such as the threshold. Run the job and, if DQ succeeds, ensure that the good file is registered in the File table.

c) Also make sure Process Logs are registered for this DQ job.

#####Twitter ingestion

a) Create the job under Data Ingestion -> Ingest From Streams. Choose Twitter as the source, Memory as the channel, and HDFS as the sink. Also enter your Twitter security token information while configuring the job. If you do not have the required security tokens, you may obtain them at https://apps.twitter.com

b) Ensure that files keep being registered in the BDRE metadata File table while the Flume Twitter ingestion is running.
