Product Release Notes

9.0

May 23rd, 2024

Platform

New Features

  1. The Repository Database has been converted to MongoDB (with or without SSL).
  2. Admin Module:
    1. Single Sign-On (SSO): Implemented Tenant-specific SSO configuration for seamless and secure access to the platform.
    2. Execution Settings: Users can now process data through execution settings for Data as API and VCS Git Pull/Push operations.
    3. Added a new key, “CA File”, to the SSL Certificate Settings.
    4. Support for MongoDB SSL connections configured in the Datastore Settings (a connection sketch follows this list).
  3. Data Center: The Job Status has been added to the Data Store list. It indicates the running status of the data store load job.
  4. Security:
    1. A new module icon has been provided for the Security module.
    2. The security module now displays the email ID along with the full name of the user.
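
The following is a minimal, illustrative sketch of how an SSL-enabled repository MongoDB connection might be established, assuming pymongo as the client library; the host, credentials, and CA file path are placeholders, not values shipped with the platform.

  from pymongo import MongoClient

  # Hypothetical connection details; replace with the values configured in the
  # Admin Module Datastore Settings and SSL Certificate Settings.
  client = MongoClient(
      "mongodb://repo-db.example.com:27017",
      username="repo_user",
      password="repo_password",
      tls=True,                     # enable the SSL connection
      tlsCAFile="/path/to/ca.pem",  # corresponds to the new "CA File" key
  )

  # Simple connectivity check against the repository database.
  print(client.admin.command("ping"))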

Data Catalog Search

New Features

  1. Homepage: Introducing a homepage for the Data Catalog Search.
  2. Refresh Catalog: Provided a Refresh Catalog icon for Admin role users to refresh the Data Catalog list and bring in the latest updates committed by platform users across all data asset types.
  3. Data Catalog crawling is now performed from the Repository MongoDB.
  4. Lineage Structure Creation: Added functionality to create lineage structures with connecting assets (see the sketch after this list).
  5. Change History Tracking: Introduced a History tab to track changes in the selected data asset over a period of time.
  6. Data Import Services: Implemented services to retrieve data from lineage collections.
  7. Job Management: Created jobs for crawling, data profiling, and history tracking.
  8. Data Profiling: Enabled data profiling to be displayed on the user interface for datasets.
  9. Data Pipeline Lineage: Implemented data pipeline lineage structures with sources and destinations.
  10. Project Status: Users can display and update the status of a specific project for a selected asset type. The available status options are Work in Progress, Verified, or Published to indicate the progress of their work.
  11. Each Asset Type displays three standard tabs: Details, Lineage, and History. Additionally, some asset types have specific tabs:
    1. Data Set: Columns, Sample Data, and Data Profile
    2. Feature Store: Sample Data
    3. Data Store: Columns
  12. Description: Admin role users and asset owners can now add descriptions to assets in the Data Catalog Search.
  13. Tags: Admin role users can add tags to assets to enhance searchability for other users.
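
As a purely hypothetical illustration of the lineage structures mentioned above (connecting assets such as data sets, pipelines, and data stores), a lineage record could be modeled as nodes and directed edges; the field names below are assumptions for illustration, not the platform's actual schema.

  # Hypothetical lineage structure: assets are nodes, connections are directed edges.
  lineage = {
      "asset": "sales_dataset",
      "nodes": [
          {"id": "s3_raw_sales", "type": "Data Store"},
          {"id": "sales_cleanup_pipeline", "type": "Data Pipeline"},
          {"id": "sales_dataset", "type": "Data Set"},
      ],
      "edges": [
          {"source": "s3_raw_sales", "target": "sales_cleanup_pipeline"},
          {"source": "sales_cleanup_pipeline", "target": "sales_dataset"},
      ],
  }

  # Walk one step upstream from the selected asset to list its direct sources.
  def upstream(asset_id, edges):
      return [e["source"] for e in edges if e["target"] == asset_id]

  print(upstream("sales_dataset", lineage["edges"]))  # ['sales_cleanup_pipeline']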

Data Preparation

New Features

  1. New Transforms: Added the following transformations to the Data Preparation module (an illustrative sketch of a few ML transforms follows this list).
    1. ML Transforms
      1. Convert value to column (One Hot encoding)
      2. Discretize values
      3. Expanding window transform
      4. Feature agglomeration
      5. encoding
      6. Label encoding
      7. Lag Transformation
      8. Leave one out encoding
      9. Principal component analysis
      10. Rolling data
      11. Singular Value decomposition
      12. Target-based quantile encoding
      13. Target encoding
      14. Weight of evidence encoding
    2. Functions Transforms
      1. Formula Based Transformation
  2. Message Snackbars: Provided a close option in the error/success message snackbars so users can dismiss them.
  3. Data Set Level Suggestions: Data set level suggestions are now displayed by default when the Data Preparation page opens, with the selected column highlighted.
  4. Auto Preparation: The Auto-Prep button is disabled after it has been applied once to the selected dataset.
  5. Data Profile: A consolidated Data Profile tab is provided, displaying Profile, Chart, Suggestion, and Pattern.
  6. Data Preparation Naming: Users must provide a name for a Data Preparation to save the actions performed on the selected dataset; an auto-generated name is assigned if none is provided.
  7. Profiling Chart: Integrated the profiling chart into the menu bar.
  8. Column Menu: The column menu appears as a drop-down icon instead of a three-bar menu for columns.
  9. Angular Version Update: Upgraded Angular version from 7 to 14.
  10. Transform Drawer UI: Implemented drawer UI for transforms.
  11. Settings: Introduced the Settings icon, opening as a drawer containing SKIP & Total Rows options.
    1. The Settings icon will be disabled for a new Data Preparation based on a Data Set.
  12. Confirmation Drawer UI: A confirmation drawer is provided under the Steps tab for all transforms that support the configuration dialog box.
  13. Column Rename Option: Users can now rename columns by clicking the drop-down menu icon and selecting the Rename Column transform from the context menu.
  14. Column Profiling: Improved efficiency for large-scale column changes, reducing time consumption.
  15. Existing Transforms: Replaced the Create New Columns checkbox with the New Column Name field for new transforms.
  16. Total Rows: Each page now displays up to 200 rows of data; with up to 5 pages of pagination, up to 1,000 rows can be shown at once.
  17. Source and Sample Size Information: Included source and sample size information in the bottom panel.
  18. Save Icon: Introduced a Save icon for saving a Data Preparation. The applied Data Preparation steps are auto-saved each time.
  19. Close Icon: Replaced the Back icon with a Close icon in the Data Preparation workspace, which closes the workspace and navigates back to the Data Preparation list.
  20. Data Set Info Icon: A newly created Data Preparation based on a Data Set contains an enabled Information icon. Once the Preparation is saved, the Information icon is removed from the Data Preparation menu bar.
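
For orientation only, the sketch below shows what a few of the new ML transforms correspond to in open-source terms (one-hot encoding, label encoding, and principal component analysis), using pandas and scikit-learn as stand-ins; it is not the Data Preparation module's own API.

  import pandas as pd
  from sklearn.preprocessing import LabelEncoder
  from sklearn.decomposition import PCA

  df = pd.DataFrame({
      "city": ["Pune", "Mumbai", "Pune"],
      "grade": ["A", "B", "A"],
      "x1": [1.0, 2.0, 3.0],
      "x2": [2.0, 1.0, 0.5],
  })

  # Convert value to column (one-hot encoding).
  one_hot = pd.get_dummies(df, columns=["city"])

  # Label encoding: map categories to integer codes.
  one_hot["grade"] = LabelEncoder().fit_transform(one_hot["grade"])

  # Principal component analysis on the numeric columns.
  components = PCA(n_components=1).fit_transform(df[["x1", "x2"]])

  print(one_hot.head(), components, sep="\n")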

Enhancements

  1. UI updates for Suggestions:
    1. The Suggestions section is now properly placed and displays the name of the selected column.
    2. The Apply button is enabled only after a suggestion is selected.
    3. Implemented highlighting of suggestions for the selected column.

Please Note: Column-level suggestions are displayed based on the column selected in the dataset. The Suggestions section no longer supports generic suggestions that encompass all columns in the dataset.

Report

New Features

  1. Themes: Eight new themes are provided for visualizing reports:
    1. Ultramarine Blue Color Palette
    2. Pacific Color Palette
    3. Briar Rose Color Palette
    4. Ocean Current Color Palette
    5. Pumping Space Color Palette
    6. Barbie Pink Color Palette
    7. Amber Palette
    8. Blue Period

Enhancements

  1. UX Enhancements
    1. The Toggle button for switching between the old and new UI is now vertical.
    2. The search bar expands as more dimensions and measures are selected.
    3. A Remove button has been provided to clear the search bar.
    4. The ‘GO’ option has been converted into an icon.
    5. A drag icon has been added for the dimensions and measures in the left side panel on the Design screen.
  2. PDF Export Properties for the Grid chart - A Grid section has been added to choose the PDF export from the following options:
    1. Screenshot - If selected, the Report is exported as screenshots.
    2. Tabular - Allows users to select columns from Analyze mode for the PDF export.
  3. Tab UX Enhancement - Users can interchange tab locations.

Please Note:

  1. The Alert functionality has been deprecated from the Report module.
  2. The ML View (Sentiment chart) Column Stack chart has been removed from the chart list because it affects other functionality such as Duplicate the View, Download, etc. More chart options will be provided in a future release.

Designer

Enhancements

  1. Multiple Label selection is provided for the Checkbox filter component.
  2. TreeMap component - Added a property to control the stroke line between blocks.

Data Science Lab

New Features

  1. Homepage: Introducing the Home page for the Data Science Lab module, providing a centralized hub for accessing key functionalities and project management from the same page.
  2. Feature Store: Implementation of the Feature Store feature enables users to store, manage, and share feature sets across projects for enhanced reusability and collaboration in the data science workflows.
  3. Default Settings Page: A Settings page has been added with default settings for project creation ensuring consistency across projects for the project setup.
  4. Trash: Implementing a Trash page for temporarily storing and recovering deleted Projects and Feature Stores to prevent accidental data loss.
  5. Model Explainability as a Job: Introduced the Model Explainability as a Job feature for DSL models, enabling users to generate model explanations and interpretability insights as part of automated job processes.
  6. Python Library for Data Preparation: Introduced a Python library dedicated to data preparation tasks, providing a comprehensive set of tools and utilities to streamline data processing workflows.
  7. Data Preparation UI Enhancement: Implemented Data Preparation as a Drawer in the user interface, offering a convenient and intuitive way for users to access data preparation functionalities within the platform without disrupting their workflow.
  8. Sequential Cell Execution (UI): Added the ability to execute cells sequentially in the user interface, enabling users to run code cells one after another in a predetermined order, facilitating step-by-step execution in the data science workflows.
  9. Function Parameter Descriptions for Datasets (UI): Added function parameter descriptions for Datasets in the user interface, providing users with clear and concise explanations of parameters to aid in dataset selection and configuration.
  10. Implementation of Append Method in DS Lab Writers: Implemented the append method in the DS Lab writers, allowing users to append data to existing datasets supporting incremental data updates.
  11. Project Creation Page as a Drawer: Modified the Create Project UI to display it as a Drawer for a more streamlined and efficient project creation experience.
  12. Updated the PySpark version to 3.4.0 to enhance performance and compatibility with the latest technology.
  13. Pull Multiple Utils Files from Git (Normal Projects): Added the ability to pull multiple Utility files from Git in normal projects, enabling users to easily incorporate utility functions and scripts stored in Git repositories into their projects.
  14. A leave-site confirmation pop-up has been added to prevent accidental page refresh or closure when there is unsaved work.
  15. Validation Option: A validation option is available for CPU and Memory at the Project level.
  16. Import File: Provided an Import File option for Repo Sync Projects.
  17. Add Folder Option: The Add Folder option is now available for normal Projects as well.
  18. Registered Models & APIs: Introduced a new tab named Registered Models & APIs to easily access lists of registered Models and APIs.
  19. Introduced the Preview option for files in the files folder.
  20. Implemented Linter Support: The linter checks code for potential errors, style issues, and adherence to coding standards, helping to improve code quality and consistency (an illustrative sketch follows this list).
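
As an illustration of the kinds of issues linter support targets, the snippet below contains problems that a common Python linter such as flake8 would flag; the specific tool is an assumption for illustration, since the release notes do not state which linter the workspace integrates.

  import os    # a linter such as flake8 reports this as F401: 'os' imported but unused
  import json


  def load_config(path):
      # Style checks (for example E501 for overly long lines) are also reported,
      # helping keep Notebook and project code consistent.
      with open(path) as handle:
          return json.load(handle)


  if __name__ == "__main__":
      print(load_config("config.json"))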

Enhancements

  1. Save Notebook: Enhanced service support to provide a seamless user experience while saving a Notebook.
  2. Workspace UI Enhancement: Enhanced the user interface for normal projects by introducing a repository tree within the Workspace tab, allowing users to easily navigate project files and directories.
  3. Enhanced Refresh Option: Users are redirected to the specific model, tab, or notebook when refreshing it.

Data Pipeline

New Features

  1. Script Executor Job: Implemented support for Python, PySpark, Go, and Perl scripts in the Script Executor job.
  2. Athena Reader: Introduced the Athena Reader component for enhanced data reading capabilities.
  3. Python On-Demand Job: Implemented the Python on-demand job feature for efficient execution.
  4. Alert View Update: Replaced the Yellow Information icon in the alert view for improved visibility.
  5. Spark Job Format Flowchart: Added a format flowchart for Spark jobs to streamline job configuration.
  6. Disabled the copy button for the Rule Splitter, File Splitter, and Schema Validator components; the copy button is also disabled when component metadata is not saved.
  7. Central Monitoring: Implemented central monitoring for pipelines and jobs for streamlined management.
  8. Real-time Data Flow: Provided a data flow stream for the active pipeline.
  9. Pipeline Component Version Update: Added functionality to update pipeline component versions for improved compatibility and performance.
  10. Edit Job Button: Added an Edit button on the List Job page for quick access to Job configurations.
  11. Preview Panel Enhancements: Provided download option in CSV, Excel, and JSON formats in the Kafka preview panel for easier data analysis.
  12. CSV File Format Enhancements: Provided Multiline, Custom Header, and Separator field support in the CSV file format for the HDFS reader, Sandbox reader, Azure Blob reader Spark, and Azure Blob reader Spark Docker components (for Job/Pipeline); see the sketch after this list.
  13. ORC File Type Support: Added ORC file type support to the Sandbox, HDFS, and S3 readers and writers (for Job/Pipeline).
  14. Expanded Job Configuration: Enhanced the Job List page with a full configuration view when job details are expanded.
  15. Pipeline Overview Enhancements: Enhanced the pipeline overview with a customizable color theme, a clear-level checkbox, logo sizing options, and an expanded description text area.
  16. Job Trigger Component: Introduced a Job Trigger Component for automated job scheduling and execution.
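
The sketch below is a rough PySpark equivalent of the new CSV reader options (Multiline, Custom Header, Separator) and the ORC writer support, intended only to show what the underlying options do; the paths and option values are placeholders, not the pipeline components' actual configuration.

  from pyspark.sql import SparkSession

  spark = SparkSession.builder.appName("csv-orc-sketch").getOrCreate()

  # CSV reader options corresponding to Multiline, Custom Header, and Separator.
  df = (
      spark.read
      .option("multiLine", "true")   # field values may span multiple lines
      .option("header", "true")      # first row is treated as the header
      .option("sep", "|")            # custom field separator
      .csv("/data/input/sales.csv")  # placeholder path
  )

  # Write the result as ORC (the format now supported for Sandbox, HDFS, and S3).
  df.write.mode("overwrite").orc("/data/output/sales_orc")  # placeholder path

  spark.stop()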

Enhancements

  1. Data Channel & Cluster Events Page UI Enhancement: Revamped the UI for the Data Channel & Cluster Events page, along with enhancements to all pipeline topics for improved usability and aesthetics.
  2. Default Configuration Page UI Enhancement: Enhanced the UI of the Default Configuration page to provide a more intuitive and user-friendly experience.
  3. System Pod Details Page Enhancements: The Spark operator logs are also displayed on the System Pod Details page.
  4. Job History Page Enhancements: Enhanced the Job History Page to include system logs for Spark jobs within job details history, facilitating comprehensive job tracking and analysis.
  5. Data Metrics Page Enhancement: Added a filter date range button with a drop-down menu to the Data Metrics Page for streamlined data analysis based on specific timeframes.
  6. Pipeline/Job Property Panel UI Enhancement: Upgraded the UI of the Pipeline/Job property panel to enhance user interaction and navigation during pipeline and job configuration.
  7. Sandbox Writer Enhancement: Improved the Sandbox Writer functionality to support writing data as part files within a directory, optimizing data storage and retrieval (see the sketch after this list).
  8. Settings Enhancement: Enhanced the Default Configuration page of the Settings section to provide more comprehensive and customized configuration options.
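
To illustrate the part-file behaviour referenced in the Sandbox Writer enhancement, the sketch below shows the general Spark pattern in which a distributed DataFrame is written as multiple part files inside the target directory, with the file count controlled by the partition count; this is a generic illustration, not the Sandbox Writer's internal implementation.

  from pyspark.sql import SparkSession

  spark = SparkSession.builder.appName("part-files-sketch").getOrCreate()
  df = spark.range(1_000_000)  # sample data

  # Each partition is written as its own part file inside the target directory;
  # repartition() controls how many part files are produced.
  df.repartition(4).write.mode("overwrite").parquet("/data/output/parts")  # placeholder path

  spark.stop()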
