Release V 5.2.0

Document Release Version

| Project | Release Date | Version |
| ------- | ------------ | ------- |
| Lern    | 27 MAR 23    | V 5.2.0 |

Details of Released Tag

| Component | Build Jenkins Job | Build Tags | Deploy Jenkins Job | Deploy Tags | Comment |
| --- | --- | --- | --- | --- | --- |
| Batch Service | Build/Core/Lms | | Deploy/Kubernetes/Lms | | |
| Data pipeline | Build/Lern/FlinkJobs | | Deploy/Lern/FlinkJobs | | Deploy the user-cache-updater-v2 Flink job only. |
| User&Org Service | Build/Core/Learner | | Deploy/Kubernetes/Learner | | |
| Data Products | Build/Lern/LernDataProducts | | Deploy/Lern/LernDataProducts | | |
| Cassandra Migration | Build/Core/Cassandra | | Deploy/Kubernetes/Cassandra | | |
| SyncTool | Build/KnowledgePlatform/SyncTool | | Deploy/KnowledgePlatform/Neo4jElasticSearchSyncTool | | cmd: syncdialcodes. Sample params: --ids U7J3S8,R9Y6W5,Y3U3F1,D5C3D6,A7R6H3,J4F5V2,E1P7P2,Y5X5T7. SyncTool enhancement to be used by existing adopters for syncing "imageUrl" of DIAL codes to Elasticsearch. |

Summary of the Changes

  • Refactoring of DIAL code dependency: An API was developed to fetch QR code image URLs and resolve relative paths from the DIAL service, instead of connecting directly to the Cassandra table.

  • API automation using Postman for P0 APIs

  • Movement of UserCache and UserCacheIndexer in Data Pipeline to Lern

  • Test Automation for CSP

  • Cassandra migration and grouping cql scripts with respect to keyspaces

Bug Fixes - click here to see the list of bugs fixed as a part of this release.

Details of the Changes:

  • LR-301 API automation using Postman for P0 APIs
  • LR-302 Movement of UserCacheIndexer Data Product to Lern
  • LR-303 Movement of UserCache in Data Pipeline to Lern
  • LR-306 Test Automation for CSP
  • LR-322 API automation using Newman for P0 APIs
  • LR-325 BatchService: Refactoring of SB Lern Batch Service - DialCode Dependency
  • LR-101 Cassandra migration and grouping cql scripts with respect to keyspaces
  • LR-307 Setting up a complete testing env for Lern with all other BBs
  • LR-122 Lern repo and pod name correction to match the component name

Env Configurations (needs to be done before service deployment):

The below environment variables need to be configured in the DevOps repo, in the 'sunbird_lms-service.env' file.

| Variable Name | Values | Comments |
| --- | --- | --- |
| sunbird_dial_service_base_url | | To store the DIAL service base path |
| sunbird_dial_service_search_url | /dialcode/v3/search | To store the search URL of the DIAL service |
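The two variables can be appended to the env file with a short shell snippet. A minimal sketch: the base-URL value shown is purely illustrative (your deployment's DIAL service host goes there), and the path assumes you run it from the directory in your DevOps repo that holds the file.

```shell
# Sketch: append the two DIAL service variables to sunbird_lms-service.env.
# The base-URL value is a placeholder -- substitute your environment's
# DIAL service host.
ENV_FILE=sunbird_lms-service.env      # path inside your DevOps repo checkout

cat >> "$ENV_FILE" <<'EOF'
sunbird_dial_service_base_url=https://dial.example.org
sunbird_dial_service_search_url=/dialcode/v3/search
EOF
```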

Note: Only for adopters who are migrating from previous versions to 5.2.0: run the 'syncdialcodes' command in "Neo4jElasticSearchSyncTool" to sync the "imageUrl" of DIAL codes to Elasticsearch. The list of DIAL codes can be fetched using the content search cURL below:

curl --location --request POST '{{host}}/api/content/v1/search' \
--header 'Content-Type: application/json' \
--data-raw '{
    "request": {
        "filters": {
            "primaryCategory": "Course"
        },
        "exists": "dialcodes",
        "fields": ["dialcodes"],
        "limit": 10000
    }
}'
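To feed the search results into the syncdialcodes command, the dialcodes arrays in the response can be flattened into the comma-separated --ids value. A minimal jq sketch, assuming the standard search response shape (result.content[].dialcodes[]); the sample response written here is a stand-in for the real output of the cURL above:

```shell
# Stand-in for the real content-search response (shape: result.content[].dialcodes[]).
cat > search-response.json <<'EOF'
{"result":{"content":[{"dialcodes":["U7J3S8"]},{"dialcodes":["R9Y6W5","U7J3S8"]}]}}
EOF

# Collect all dialcodes, de-duplicate them, and join them with commas.
jq -r '[.result.content[]? | .dialcodes[]?] | unique | join(",")' search-response.json
# -> R9Y6W5,U7J3S8
```

The printed value can be passed directly as the --ids parameter of the syncdialcodes command.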

| Name of the Flink Job added |
| --- |
| user-cache-updater-v2 |

LR-303 - Movement of UserCache in Data Pipeline to Lern - setup/configuration details

Flink build Jenkins job name: /Build/job/Lern/job/FlinkJobs

Flink deploy Jenkins job name:

/Deploy/job/<environment>/job/Lern/job/FlinkJobs/user-cache-updater-v2

Data Product Configurations for Lern:

| DataProduct Name |
| --- |
| UserCacheIndexer |

LR-302 Movement of UserCacheIndexer Data Product to Lern - setup/configuration details

Please define the below configuration in the data products repo (lern-data-products/src/main/resources/application.conf) for the UserCacheIndexerJob data product to work:

redis.host=__redis_host__
redis.port="6379"
redis.connection.max=20
location.db.redis.key.expiry.seconds=3600
redis.connection.idle.max=20
redis.connection.idle.min=10
redis.connection.minEvictableIdleTimeSeconds=120
redis.connection.timeBetweenEvictionRunsSeconds=300
redis.max.pipeline.size="100000"
#CassandraToRedis Config
spark.cassandra.connection.host="localhost"
cassandra.user.keyspace="sunbird"
cassandra.user.table="user"
redis.user.database.index="12"
redis.user.input.index="4"
redis.user.backup.dir="src/mount/data/analytics/content-snapshot/redisbackup"
redis.scan.count="100000"
redis.user.index.source.key="id" # this will be used as key for redis
cassandra.read.timeoutMS="500000"
cassandra.query.retry.count="100"
cassandra.input.consistency.level="LOCAL_QUORUM"
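The `__redis_host__` value above is a placeholder that must be filled in per environment. A minimal sketch of substituting it at deploy time; a one-line stand-in file is created here so the snippet is self-contained, and the host value is illustrative:

```shell
# Sketch: fill in the __redis_host__ placeholder before running the job.
# CONF points at application.conf in your lern-data-products checkout;
# a stand-in file is created here so the sketch is self-contained.
CONF=application.conf
printf 'redis.host=__redis_host__\n' > "$CONF"   # stand-in for the real file

REDIS_HOST=10.0.0.5                              # illustrative host value
sed -i "s/__redis_host__/${REDIS_HOST}/" "$CONF"

grep redis.host "$CONF"                          # -> redis.host=10.0.0.5
```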
