Release V 5.2.0

Document Release Version

| Project | Release Date | Version |
| ------- | ------------ | ------- |
| Lern | 27 MAR 23 | V 5.2.0 |

Details of Released Tag

| Component | Build Jenkins Job | Build Tags | Deploy Jenkins Job | Deploy Tags | Comment |
| --- | --- | --- | --- | --- | --- |
| Batch Service | Build/Core/Lms | | Deploy/Kubernetes/Lms | | |
| Data pipeline | Build/Lern/FlinkJobs | | Deploy/Lern/FlinkJobs | | Deploy the user-cache-updater-v2 Flink job only |
| User&Org Service | Build/Core/Learner | | Deploy/Kubernetes/Learner | | |
| Data Products | Build/Lern/LernDataProducts | | Deploy/Lern/LernDataProducts | | |
| Cassandra Migration | Build/Core/Cassandra | | Deploy/Kubernetes/Cassandra | | |
| SyncTool | Build/KnowledgePlatform/SyncTool | | Deploy/KnowledgePlatform/Neo4jElasticSearchSyncTool | | cmd: syncdialcodes. Sample params: --ids U7J3S8,R9Y6W5,Y3U3F1,D5C3D6,A7R6H3,J4F5V2,E1P7P2,Y5X5T7. SyncTool enhancement to be used by existing adopters for syncing the "imageUrl" of DIAL codes to Elasticsearch. |

Summary of the Changes

  • Refactoring of DIAL code dependency: an API was developed to fetch QR code image URLs and resolve relative paths from the DIAL service, replacing the previous direct connection to the Cassandra table (a sample request sketch follows this list).

  • API automation using Postman for P0 APIs

  • Movement of UserCache and UserCacheIndexer in Data Pipeline to Lern

  • Test Automation for CSP

  • Cassandra migration and grouping of CQL scripts by keyspace
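For adopters who want to verify the new DIAL service dependency, the search endpoint can be exercised directly. The sketch below is illustrative only: <dial_service_base_url> stands for the value configured in sunbird_dial_service_base_url (see Env Configurations below), and the request envelope ("request.search.identifier") is an assumption based on the common DIAL code search contract, not something shipped with this release.

# Illustrative DIAL service search call; replace <dial_service_base_url>
# with the value of sunbird_dial_service_base_url for your environment.
# The body shape is assumed, not confirmed by this release note.
curl --location --request POST '<dial_service_base_url>/dialcode/v3/search' \
--header 'Content-Type: application/json' \
--data-raw '{
    "request": {
        "search": {
            "identifier": ["U7J3S8", "R9Y6W5"]
        }
    }
}'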

Bug Fixes - see the list of bugs fixed as a part of this release.

Details of the Changes:

  • LR-301 API automation using Postman for P0 APIs

  • LR-302 Movement of UserCacheIndexer Data Product to Lern

  • LR-303 Movement of UserCache in Data Pipeline to Lern

  • LR-306 Test Automation for CSP

  • LR-322 API automation using Newman for P0 APIs

  • LR-325 BatchService: Refactoring of SB Lern Batch Service - DialCode Dependency

  • LR-101 Cassandra migration and grouping CQL scripts by keyspace

  • LR-307 Setting up a complete testing environment for Lern with all other BBs

  • LR-122 Lern repo and pod name correction to match the component name

Env Configurations (Needs to be done before service deployment):

The below environment variables need to be configured in the DevOps repo, in the 'sunbird_lms-service.env' file.

| Variable Name | Values | Comments |
| --- | --- | --- |
| sunbird_dial_service_base_url | | To store the DIAL service base path |
| sunbird_dial_service_search_url | /dialcode/v3/search | To store the search URL of the DIAL service |
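For reference, a minimal sketch of how the two entries might look in 'sunbird_lms-service.env'; the base URL below is a placeholder host, not a value shipped with this release.

# Placeholder values - substitute the DIAL service base URL of your deployment.
sunbird_dial_service_base_url=https://dial.example.org
sunbird_dial_service_search_url=/dialcode/v3/search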

Note: Only for adopters migrating from a previous version to 5.2.0 - run the 'syncdialcodes' command in "Neo4jElasticSearchSyncTool" to sync the "imageUrl" of DIAL codes to Elasticsearch. The DIAL code list can be fetched using the content search cURL below:

curl --location --request POST '{{host}}/api/content/v1/search' \
--header 'Content-Type: application/json' \
--data-raw '{
    "request": {
        "filters": {
            "primaryCategory": "Course"
        },
        "exists": "dialcodes",
        "fields": ["dialcodes"],
        "limit": 10000
    }
}'
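The response can then be reduced to the comma-separated list expected by the syncdialcodes '--ids' parameter. A minimal sketch using jq, assuming the standard content search response envelope with hits under result.content and DIAL codes in each hit's "dialcodes" array:

# Collect all DIAL codes from the search response into a single
# comma-separated list suitable for: syncdialcodes --ids <list>
curl --silent --location --request POST '{{host}}/api/content/v1/search' \
--header 'Content-Type: application/json' \
--data-raw '{"request": {"filters": {"primaryCategory": "Course"}, "exists": "dialcodes", "fields": ["dialcodes"], "limit": 10000}}' \
| jq -r '[.result.content[].dialcodes // empty | .[]] | unique | join(",")'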

Note: Only for adopters migrating from a previous version to 5.2.0, the following steps are needed:

  • Create the cassandra_migration_version and cassandra_migration_version_counts tables in the respective keyspaces using the queries below (a scripted variant follows the DDL).

  • Replace <keyspace> with the below keyspace names:

sunbird_groups

sunbird_notifications

sunbird_courses

CREATE TABLE <keyspace>.cassandra_migration_version (
    version text PRIMARY KEY,
    checksum int,
    description text,
    execution_time int,
    installed_by text,
    installed_on timestamp,
    installed_rank int,
    script text,
    success boolean,
    type text,
    version_rank int
) WITH bloom_filter_fp_chance = 0.01
  AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
  AND comment = ''
  AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
  AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
  AND crc_check_chance = 1.0
  AND dclocal_read_repair_chance = 0.1
  AND default_time_to_live = 0
  AND gc_grace_seconds = 864000
  AND max_index_interval = 2048
  AND memtable_flush_period_in_ms = 0
  AND min_index_interval = 128
  AND read_repair_chance = 0.0
  AND speculative_retry = '99PERCENTILE';

CREATE TABLE <keyspace>.cassandra_migration_version_counts (
    name text PRIMARY KEY,
    count counter
) WITH bloom_filter_fp_chance = 0.01
  AND caching = {'keys': 'ALL', 'rows_per_partition': 'NONE'}
  AND comment = ''
  AND compaction = {'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32', 'min_threshold': '4'}
  AND compression = {'chunk_length_in_kb': '64', 'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
  AND crc_check_chance = 1.0
  AND dclocal_read_repair_chance = 0.1
  AND default_time_to_live = 0
  AND gc_grace_seconds = 864000
  AND max_index_interval = 2048
  AND memtable_flush_period_in_ms = 0
  AND min_index_interval = 128
  AND read_repair_chance = 0.0
  AND speculative_retry = '99PERCENTILE';
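To avoid hand-editing the DDL for each keyspace, the substitution can be scripted. A minimal sketch, assuming the two CREATE TABLE statements above are saved locally as cassandra_migration_tables.cql (a hypothetical file name) and that CASSANDRA_HOST points at the cluster:

# Apply the migration-tracking DDL to each Lern keyspace by substituting
# <keyspace> and piping the resulting statements to cqlsh.
for ks in sunbird_groups sunbird_notifications sunbird_courses; do
    sed "s/<keyspace>/${ks}/g" cassandra_migration_tables.cql | cqlsh "$CASSANDRA_HOST"
done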
  • To export the cassandra_migration_version table:

COPY sunbird.cassandra_migration_version TO '/tmp/cassandra_migration_version.csv';

  • To import the cassandra_migration_version table:

COPY <keyspace>.cassandra_migration_version FROM '/tmp/cassandra_migration_version.csv';

  • To export the cassandra_migration_version_counts table:

COPY sunbird.cassandra_migration_version_counts TO '/tmp/cassandra_migration_version_counts.csv';

  • To import the cassandra_migration_version_counts table:

COPY <keyspace>.cassandra_migration_version_counts FROM '/tmp/cassandra_migration_version_counts.csv';
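For example, carrying the version history over to the sunbird_courses keyspace could look like the following, with CASSANDRA_HOST as a placeholder; run the export once, then repeat the import for each keyspace listed above:

# Export once from the sunbird keyspace, then import into each target keyspace.
cqlsh "$CASSANDRA_HOST" -e "COPY sunbird.cassandra_migration_version TO '/tmp/cassandra_migration_version.csv';"
cqlsh "$CASSANDRA_HOST" -e "COPY sunbird_courses.cassandra_migration_version FROM '/tmp/cassandra_migration_version.csv';"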
Name of the Flink Job added

user-cache-updater-v2

LR-303 - Movement of UserCache in Data Pipeline to Lern - setup/configuration details

Flink build Jenkins job name: /Build/job/Lern/job/FlinkJobs

Flink deploy Jenkins job name:

/Deploy/job/<environment>/job/Lern/job/FlinkJobs/user-cache-updater-v2

Data Product Configurations for Lern:

Name of the Data Product added

UserCacheIndexer

LR-302 Movement of UserCacheIndexer Data Product to Lern - setup/configuration details

Please define the below configuration in the data products repository (lern-data-products/src/main/resources/application.conf) for the UserCacheIndexerJob data product to work:

redis.host=__redis_host__
redis.port="6379"
redis.connection.max=20
location.db.redis.key.expiry.seconds=3600
redis.connection.idle.max=20
redis.connection.idle.min=10
redis.connection.minEvictableIdleTimeSeconds=120
redis.connection.timeBetweenEvictionRunsSeconds=300
redis.max.pipeline.size="100000"
#CassandraToRedis Config
spark.cassandra.connection.host="localhost"
cassandra.user.keyspace="sunbird"
cassandra.user.table="user"
redis.user.database.index="12"
redis.user.input.index="4"
redis.user.backup.dir="src/mount/data/analytics/content-snapshot/redisbackup"
redis.scan.count="100000"
redis.user.index.source.key="id" # this will be used as key for redis
cassandra.read.timeoutMS="500000"
cassandra.query.retry.count="100"
cassandra.input.consistency.level="LOCAL_QUORUM"
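Once the UserCacheIndexer job has run, the cache can be spot-checked directly in Redis. A minimal sketch, assuming the defaults above (user records in database index 12, Redis on port 6379) and REDIS_HOST as a placeholder:

# List a few keys from the user cache database to confirm it was populated.
redis-cli -h "$REDIS_HOST" -p 6379 -n 12 --scan --count 100 | head -n 5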
