This guide describes how to deploy data-products on the server.
Setup and execution of data-products on the server
Each data-product is an independent Spark job, launched via spark-submit, that generates reports or performs data migrations. All data sources and dependency libraries must therefore be present on the server before a data-product is executed.
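As an illustration, launching a single data-product via spark-submit might look like the sketch below. The jar path, main class, model name, and config path are assumptions for this example, not the exact values used by the deployment jobs.

```bash
# Hypothetical sketch of how one data-product could be launched.
# The jar path, main class, model name, and config file location are
# illustrative assumptions, not the project's actual values.
spark-submit \
  --master local[*] \
  --class org.sunbird.analytics.JobExecutor \
  /mount/data/analytics/models/lern-data-products.jar \
  --model "progress-exhaust" \
  --config /mount/data/analytics/scripts/model-config.json
```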
Building data-product
Job Path: Build/Lern/LernDataProducts
Deploying data-product
Job Path: Deploy/{{env}}/Lern/LernDataProducts
Params:
module - selects which part of the data-products deployment to run
lern-dataproducts - to deploy the data-products
lern-dataproducts-spark-cluster - to deploy the data-products on a Spark cluster (such as an HDInsight cluster)
cronjobs - to update the cron jobs on the server
remote - the Spark server to which the selected module is deployed (an example of triggering this job is sketched below)
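For reference, the Deploy job can also be triggered remotely through the standard Jenkins buildWithParameters endpoint. The Jenkins host, credentials, and parameter values below are placeholders for this sketch; the parameter names follow the list above.

```bash
# Hypothetical sketch: trigger the Deploy job via the Jenkins REST API.
# <jenkins-host>, <env>, credentials, and the remote value are placeholders.
curl -X POST "https://<jenkins-host>/job/Deploy/job/<env>/job/Lern/job/LernDataProducts/buildWithParameters" \
  --user "<user>:<api-token>" \
  --data "module=lern-dataproducts" \
  --data "remote=<spark-server>"
```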
Cron jobs
Data-products run in daemon mode and are triggered on a schedule by cron jobs, as in the example entry below.
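A minimal sketch of such a crontab entry is shown below; the wrapper script path, job id, and log location are assumptions for illustration, not the actual scheduled entries.

```bash
# Hypothetical crontab entry: run the progress-exhaust data-product
# daily at 02:00. The script path, job id, and log file are assumptions.
0 2 * * * /mount/data/analytics/scripts/run-job.sh progress-exhaust >> /mount/data/analytics/logs/joblog.log 2>&1
```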
Provisioning Postgres DB for exhaust job execution
The exhaust jobs in data-products use the job_request table in the Postgres DB to maintain exhaust job requests.
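A minimal sketch of provisioning such a table with psql is shown below. The database name, user, and column set are assumptions; the actual schema should be taken from the project's DDL scripts.

```bash
# Hypothetical sketch: create a minimal job_request table with psql.
# Database name, user, and columns are illustrative assumptions; use the
# project's actual DDL scripts for the real schema.
psql -U analytics -d analytics_db -c "
CREATE TABLE IF NOT EXISTS job_request (
  tag              VARCHAR(100),
  request_id       VARCHAR(50),
  job_id           VARCHAR(50),
  status           VARCHAR(50),
  requested_by     VARCHAR(50),
  request_data     TEXT,
  dt_job_submitted TIMESTAMP,
  dt_job_completed TIMESTAMP,
  PRIMARY KEY (tag, request_id)
);"
```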
Data-products run on the server via cron jobs. For development and testing purposes, the Jenkins job below can be used to trigger a job with the respective job id.