Introduction
Welcome to the latest release of the BDB Platform! These
release notes provide a comprehensive overview of the new features, enhancements, bug
fixes, and other important changes included in this update. This release focuses on
improved user experience and a unified UI, as well as the introduction of our Data
Agent: Virtual Data Analyst.
We recommend reviewing these notes thoroughly to understand how the new changes may
impact your workflows and to take advantage of the latest functionalities.
Key Highlights
This section summarizes the most significant new features and
platform improvements in Release 10.0, designed to enhance data analysis, user
personalization, and operational efficiency.
- Enhanced User Experience & Intuitive UI: We have
significantly improved the user experience with a new, unified, and intuitive user
interface that provides a truly tailored experience for both business users and
developers, streamlining workflows and increasing productivity.
- Data Agents - Intelligent Virtual Data Analysts: We introduce advanced Data Agents,
powered by Agentic AI. These virtual data analysts
are designed to autonomously perform complex data analysis, identify trends, and
generate actionable insights, significantly reducing the manual effort required for
data interpretation and accelerating data-driven decision-making.
- My Insights - Personalized Performance Monitoring:
'My Insights' provides a customizable dashboard for Key Performance Indicators
(KPIs) tailored to individual user roles and profiles. This feature empowers users
to monitor the metrics most relevant to their responsibilities, fostering a deeper
understanding of personal and team performance, and enabling proactive adjustments.
- Micro Functions - Flexible Automation and Triggers:
Micro Functions offer a powerful new way to extend data connector capabilities.
These small, reusable functions can be configured to act as triggers for Data Agents
based on specific data events, or they can be invoked directly from the Dashboard to
execute custom operations. This provides unprecedented flexibility for automating
workflows and responding dynamically to business needs (a minimal sketch of such a
function follows this list).
- Document Store - Enhanced Data Agent Capabilities: A new Document Store has been
integrated to facilitate efficient document embedding.
This capability allows Data Agents to process and analyze unstructured data from
documents, enriching their analytical scope and enabling more comprehensive insights
by incorporating textual information.
- Optimized Monitoring and Logging Infrastructure:
The platform's monitoring and logging systems have undergone significant
optimization. This enhancement provides improved performance, greater reliability,
and more granular insights into system operations, enabling better proactive
management and faster issue resolution.
- Automated Data Quality Checks: Introduced robust
data quality checks directly on data connectors. Users can now define custom quality
checks and schedule automated jobs, which will generate insightful dashboards
providing a clear overview of data health and potential anomalies.
- Workspace Segregation for Collaborative Content: To
facilitate efficient collaboration and tailored use cases, we've added the option to
segregate content and content creation based on distinct workspaces. This
enhancement empowers cross-functional teams to work independently on different
projects and use cases within a secure and organized environment.
- Agentic Tools - Empowering Developer Extensibility:
We are introducing Agentic Tools, a new capability that enables developers to create
and publish custom tools. These tools can then be seamlessly invoked by our Agentic
AI, significantly expanding the platform's functionality and allowing for bespoke
integrations and automations tailored to unique business requirements.
- Agentic AI Workloads - Autonomous Task Execution:
Developers can now define Agentic AI Workloads, enabling the platform's Agentic AI
to autonomously perform complex tasks. These workloads are powered by preconfigured
knowledge bases and leverage the newly introduced Agentic Tools, allowing for highly
automated and intelligent task completion across various use cases.
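The Micro Functions highlight above describes small, reusable functions that can trigger
Data Agents on specific data events or be invoked from the Dashboard. The following
minimal Python sketch illustrates the general shape such a function might take; the
function name, record fields, and trigger payload are illustrative assumptions, not the
platform's SDK.

    # Hypothetical sketch of a Micro Function that inspects a batch of connector
    # records and signals whether a Data Agent run should be triggered.
    # The structure shown here is illustrative, not the BDB SDK.

    from typing import Any, Dict, List


    def late_shipment_trigger(records: List[Dict[str, Any]], threshold: float = 0.1) -> Dict[str, Any]:
        """Return a trigger payload when the share of late shipments exceeds the threshold."""
        if not records:
            return {"trigger": False, "reason": "no records in batch"}

        late = sum(1 for r in records if r.get("status") == "LATE")
        late_ratio = late / len(records)

        return {
            "trigger": late_ratio > threshold,           # whether a Data Agent run should start
            "metric": {"late_ratio": round(late_ratio, 3)},
            "reason": f"{late} of {len(records)} shipments late",
        }


    if __name__ == "__main__":
        sample = [{"status": "LATE"}, {"status": "ON_TIME"}, {"status": "LATE"}]
        print(late_shipment_trigger(sample, threshold=0.25))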
Module-Specific Updates
This section details the changes within each of the eight modules.
Module 1: Landing Zone
The Landing Zone module has undergone a complete revamp in this
release, representing a significant leap forward in user interaction and customization.
Designed as the primary entry point after logging in, it offers a highly personalized
and efficient starting experience.
- Fully Customizable Landing Experience: The Landing
Zone now offers unparalleled customization, allowing users to define their immediate
post-login experience. Whether it's directly launching a Data Agent for immediate
insights or navigating to a specific dashboard, the platform adapts to individual
preferences for enhanced productivity.
- Persona-Driven UI Tailoring: The user interface within the Landing Zone can
be dynamically configured to cater to distinct user personas. Menus, dashboards, and
visual elements are fully customizable to align with the unique needs and workflows
of both technical developers and business users, ensuring a highly relevant and
efficient environment.
- Futuristic Unified Access: We have engineered a cutting-edge, unified access
paradigm that provides a seamless and consistent entry point to all platform
functionalities. This streamlined approach minimizes navigation friction and fosters
an intuitive user journey across the entire BDB Platform.
- Personalized Insight Section for Business Users: Business users now have a
dedicated, customizable insights section within the Landing Zone. Here, they can
configure and monitor key performance indicators (KPIs) vital to their role using
fully customizable widgets, providing immediate access to critical business metrics
upon login.
- Workspace Selection: Users can now easily select and switch between
different workspaces, leveraging the platform's new content segregation capabilities
to enhance collaborative workflows and manage diverse use cases.
Module 2: Data Center
- New Features:
- Unified User Interface for Enhanced Control:
The Data Center now features a unified user interface, providing a
consistent and intuitive experience for managing all data assets. This
consolidated design simplifies data governance and enhances user control
over data operations.
- Integrated Micro Functions: Micro
Functions are now seamlessly integrated within the Data Center, enabling
users to define and execute small, reusable functions directly on their
data. This allows for more granular data manipulation, custom data
preparation, and tailored data operations.
- Comprehensive Data Quality Management:
Robust data quality capabilities have been integrated directly into the
Data Center. Users can now define, enforce, and monitor data quality rules,
ensuring data integrity and reliability for downstream analytics and
operations. Automated reports and dashboards provide clear visibility into
data health (a hedged sketch of defining and evaluating such a rule appears at the
end of this module's notes).
- Enhanced Visibility of Tables and Collections:
Data connectors within the Data Center now provide a clear and
organized listing of associated tables and collections. This improved
visibility makes it easier for users to discover, locate, and understand the
structure of their connected data assets.
- Consolidated Feature Store: The Feature
Store has been strategically moved to the Data Center from the DS Lab. This
consolidation provides a centralized location for managing and accessing
curated features, streamlining the process of building and deploying machine
learning models.
- Improved Entitlement Management: Enhanced
entitlement controls within the Data Center offer more robust and granular
access management for data assets. Administrators can now define precise
permissions, ensuring data security and compliance across various user roles
and teams.
- Customizable Visualization Widgets:
Introduced the capability to create and publish highly customizable
visualization widgets. These widgets can be seamlessly integrated as add-ons
in various modules across the platform, including the Landing Zone, allowing
users to embed tailored data visualizations directly where they are most
relevant for quick insights and enhanced data exploration.
- Enhancements & Improvements:
- Enabled display of table and column names within the MongoDB connector to
improve schema visibility and ease of data exploration.
- Integrated AI Assist to automatically generate optimized queries for datasets,
enhancing user productivity and simplifying data access.
- Extended AI Assist capabilities to support
intelligent query generation for Data Stores, streamlining query creation
for complex data environments.
- Introduced functionality to transfer core
ownership within the DataPrep module under the Data Center, allowing better
governance and administrative control.
- Added support for displaying sample records on the
Data Store validation page to assist in quick verification and
troubleshooting of data configurations.
- Bug Fixes:
- Fixed an issue where attempting to edit a datastore whose parent data connector
had been deleted rendered a blank page instead of an appropriate error or
fallback.
- Fixed an issue where tables residing in non-public schemas were not ingested
into the metadata store when using PostgreSQL as the datastore.
- Fixed an issue in Redshift connector integrations where tables located in
non-public schemas did not appear in the metadata or info table for listings
within datasets.
- Fixed an issue where the connector falsely reported a successful reconnection
while the Elastic database remained unavailable, which could lead to misleading
status indicators.
- Fixed an issue where columns with data types such as Decimal or
Nullable(Decimal(5,2)) were not listed in the metadata store during metadata
ingestion for ClickHouse data stores.
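As a companion to the Comprehensive Data Quality Management item above, the following
minimal Python sketch illustrates one way a custom quality rule could be defined and
evaluated to produce a pass-rate summary for a data-health dashboard. The rule
structure, field names, and report shape are illustrative assumptions, not the Data
Center's actual configuration API.

    # Hypothetical sketch of a custom data quality check: a rule evaluated against
    # connector records, producing a summary a scheduled job could surface on a
    # data-health dashboard. Names and fields are illustrative assumptions.

    from dataclasses import dataclass
    from typing import Any, Callable, Dict, List


    @dataclass
    class QualityRule:
        name: str
        column: str
        predicate: Callable[[Any], bool]   # True when a value passes the check


    def evaluate_rules(records: List[Dict[str, Any]], rules: List[QualityRule]) -> Dict[str, Dict[str, float]]:
        """Return pass rates per rule, suitable for plotting on a data-health dashboard."""
        report: Dict[str, Dict[str, float]] = {}
        for rule in rules:
            values = [r.get(rule.column) for r in records]
            passed = sum(1 for v in values if rule.predicate(v))
            report[rule.name] = {
                "checked": float(len(values)),
                "pass_rate": passed / len(values) if values else 1.0,
            }
        return report


    if __name__ == "__main__":
        rules = [
            QualityRule("order_id_not_null", "order_id", lambda v: v is not None),
            QualityRule("amount_non_negative", "amount", lambda v: isinstance(v, (int, float)) and v >= 0),
        ]
        rows = [{"order_id": 1, "amount": 10.0}, {"order_id": None, "amount": -5}]
        print(evaluate_rules(rows, rules))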
Module 3: Data Pipeline
- New Features:
- Ownership Transfer Option:
Introduced the ability to transfer ownership of data pipelines,
facilitating easier management and collaboration across teams.
- New Components:
- Pinot Reader & Writer components: Added specialized components for reading
from and writing to Apache Pinot, enabling efficient integration with
real-time analytics databases.
- Email Listener component: A new component to listen for and process
incoming emails, allowing for email-driven data ingestion and workflow
automation.
- Google BigQuery Reader: Integrated a new component for reading data
directly from Google BigQuery, enhancing connectivity to cloud-based data
warehouses.
- Schedule Component Invocation Mode:
Enhanced scheduling options allow users to define component invocation based
on scheduled intervals, providing finer control and helping to optimize
resource utilization by running components only when necessary.
- Pipeline Scheduled Runs: Pipelines can
now be scheduled to run at a specified time, ensuring all components within
the pipeline are invoked based on predefined intervals, streamlining
automated data processing workflows.
- Run Workloads on Selected Compute:
Users can now specify and run data processing workloads on selected compute
resources based on their specific processing requirements, optimizing
resource utilization and performance for diverse tasks.
- Enhancements & Improvements:
- Improved User Experience and Unified Design:
Implemented a new unified design for the data pipeline workspace,
enhancing user experience and workflow efficiency.
- Entitlement for Data Pipelines:
Introduced robust entitlement management for data pipelines, ensuring secure
access and control based on user roles and permissions.
- Improved Logging and Monitoring:
Enhanced logging and monitoring capabilities specific to data pipelines,
providing more detailed insights into pipeline execution and performance.
- Workspace-Based Access Segregation:
Implemented access segregation based on workspaces, ensuring that users can
only access and manage data pipelines within their authorized work
environments.
- API Ingestion Component Performance
Improvement: Enhanced the performance of API ingestion components
through the introduction of caching mechanisms, significantly speeding up
data retrieval.
- Fine-tuning of Auto ML Component:
Optimized the invocation of the Auto ML component to improve resource
utilization and enhance efficiency in machine learning model training and
deployment.
- Bug Fixes:
- Fixed an issue where data transformations
occasionally resulted in incorrect outputs under specific conditions.
- Resolved a bug preventing the proper sequencing of
certain parallel processing tasks.
- Addressed an issue with the Event-hub subscriber
component offset handling.
Module 4: Jobs
- New Features:
- On-Demand Invocation Method for PySpark Jobs:
Introduced an on-demand invocation method for PySpark-based jobs,
enabling users to trigger these jobs via API with a custom payload,
providing greater flexibility for integration and automation (a hedged
invocation sketch appears at the end of this module's notes).
- New Alert Channel (Email): Added email as
a new alert channel for job success and failure notifications, allowing
users to receive immediate updates on job status.
- Cluster (Nodepool) Selection for Job
Deployment: Users can now choose the specific cluster (nodepool) on
which their jobs will be deployed, providing better control over resource
allocation and performance optimization.
- Google Cloud Storage Reader and Writer
Components: Integrated new Google Cloud Storage reader and writer
components within Spark jobs, enabling seamless data interaction with GCS.
- BigQuery Reader in Spark Jobs: Added a
BigQuery reader component specifically for Spark jobs, enhancing direct data
connectivity to Google BigQuery from your Spark environment.
- Enhancements & Improvements:
- Version Control: Implemented
comprehensive version control for jobs, allowing users to track changes,
revert to previous versions, and manage development workflows more
effectively.
- Scheduling Optimization: Enhanced job
scheduling capabilities to improve resource allocation and execution
efficiency, ensuring timely completion of tasks.
- Logging and Monitoring Optimization:
Optimized logging and monitoring for jobs, providing more granular
insights into job status, performance, and potential issues for quicker
debugging.
- Log Storage Optimization: Introduced a
mechanism to automatically remove old logs, reclaiming storage space and
managing storage costs more efficiently.
- Entitlement for Jobs: Introduced entitlement management for jobs, ensuring
secure access and control based on user roles and permissions.
- Bug Fixes:
- Resolved an issue that caused slow load times on
the Job List page. Optimizations were implemented to improve data fetching,
resulting in faster load times and a smoother user experience.
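As a companion to the On-Demand Invocation Method item above, here is a minimal,
hypothetical sketch of triggering a PySpark-based job via API with a custom payload,
using Python's requests library. The endpoint path, header, job identifier, and payload
fields are assumptions for illustration only, not the documented BDB API.

    # Hypothetical sketch of invoking a PySpark job on demand over HTTP with a
    # custom payload. The URL, endpoint, and payload structure are illustrative
    # assumptions, not the platform's documented API.

    import requests

    BASE_URL = "https://bdb.example.com"        # assumed platform URL
    JOB_ID = "daily-sales-aggregation"          # assumed job identifier
    API_TOKEN = "replace-with-your-token"       # assumed auth token


    def invoke_job(run_date: str) -> dict:
        """POST a custom payload to start the job and return the API response."""
        response = requests.post(
            f"{BASE_URL}/api/jobs/{JOB_ID}/invoke",              # assumed endpoint
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            json={"parameters": {"run_date": run_date}},          # custom payload
            timeout=30,
        )
        response.raise_for_status()
        return response.json()


    if __name__ == "__main__":
        # Requires a reachable platform URL and a valid token; prints the run metadata.
        print(invoke_job("2024-06-01"))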
Module 5: Data Science Labs
- New Features:
- Agentic Tools Development Environment:
BDB's DS Lab now empowers users to seamlessly define, develop, test, and
publish specialized Agentic Tools for AI agents. This dedicated module
provides a comprehensive environment where data scientists and developers
can craft custom tools precisely tailored to the needs of their AI agents,
extending their capabilities and enabling highly specialized automated
tasks (a hedged sketch of such a tool appears at the end of this module's notes).
- Unified Model Repository: The Unified
Model Repository in BDB's DS Lab is a
central hub designed to streamline the entire lifecycle of your machine
learning models, from development and evaluation to deployment and
consumption. This powerful new feature provides a single, cohesive
environment for managing all your models, ensuring consistency, governance,
and ease of use.
- Unified User Experience: DS Lab is
meticulously re-designed to offer a unified and intuitive user experience,
ensuring that users can seamlessly navigate across all its powerful features
without friction. Our commitment to a cohesive interface means that data
scientists, ML engineers, and analysts can effortlessly transition between
different stages of their machine learning workflow, enhancing productivity
and reducing cognitive load.
- Enhancements & Improvements:
- Persistent Notebook Saving: This
enhancement allows users to save their work within a notebook regardless of
whether the associated project is currently active or deactivated, ensuring
data persistence and preventing potential loss of progress.
- Enhanced Notebook Output for Tabular Data:
We have improved the display and rendering of tabular data within notebook output
cells. This provides a more structured and user-friendly view of data frames and
other table-like structures, making analysis more efficient.
- Notebooks in New Browser Tabs: Users can
now open individual notebooks in new browser tabs. This feature facilitates
multitasking and allows for easier navigation between different workbooks
or simultaneous viewing of multiple notebooks.
- Code Auto-completion: To streamline the
coding process and reduce errors, we've integrated an auto-completion
feature. This provides intelligent suggestions as users type, accelerating
development and improving code accuracy.
- Improved Notebook Code Editor: The
integrated code editor within notebooks has undergone enhancements to
provide a more robust and intuitive coding experience. This includes
improvements to syntax highlighting, error detection, and overall editor
responsiveness.
- Refined Logging System: We've implemented
improvements to the logging mechanism within the DS Lab. This provides more
detailed and actionable insights into system processes, user activities, and
potential issues, aiding in troubleshooting and performance monitoring.
- Bug Fixes:
- Resolved auto-scroll malfunction for cells with
overflowing content.
- Resolved an issue where plots were not generated
in the Explainer Dashboard for forecasting model experiments.
- Fixed an issue where data could not be retrieved
from the S3 bucket using SparkSession in the PySpark environment.
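As a companion to the Agentic Tools Development Environment item above, the following
minimal Python sketch shows the general shape a custom Agentic Tool might take: a plain
function plus the metadata an agent runtime would need to decide when to call it. The
tool body, metadata fields, and publishing flow are illustrative assumptions, not the
DS Lab API.

    # Hypothetical sketch of an Agentic Tool of the kind DS Lab lets developers
    # define and publish. Metadata shape and publish step are illustrative
    # assumptions, not the DS Lab API.

    from typing import Dict


    def currency_conversion_tool(amount: float, rate: float) -> Dict[str, float]:
        """Convert an amount using a supplied exchange rate; a trivial example tool body."""
        return {"converted_amount": round(amount * rate, 2)}


    # Metadata an agent runtime would typically need to decide when to call the tool.
    TOOL_SPEC = {
        "name": "currency_conversion",
        "description": "Convert a monetary amount using an exchange rate.",
        "parameters": {
            "amount": {"type": "number", "description": "Amount in the source currency"},
            "rate": {"type": "number", "description": "Exchange rate to the target currency"},
        },
    }

    if __name__ == "__main__":
        # Locally exercise the tool before publishing it to the agent runtime.
        print(currency_conversion_tool(amount=100.0, rate=0.92))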
Module 6: Data Agents & Document Agents
We are thrilled to announce the introduction of two new agents:
1. Data Agent (Virtual Data Analyst): Designed to revolutionize how your organization
extracts insights and makes data-driven decisions.
2. Document Agent: A comprehensive document intelligence capability that transforms
how organizations interact with their knowledge base, enabling instant access to
insights buried in documents, accelerating research and analysis workflows, and
ensuring critical information is never overlooked.
- Intuitive Natural Language Configuration (Knowledge
Repository):
- Business experts can now directly "teach" their
Data Agents using plain English. The right-hand panel during agent creation
serves as a dynamic Knowledge Repository, allowing you to define the agent's
core mandate, operational guidelines, critical output distinctions, and how
it handles various analytical requests (e.g., visual analysis, data
listings, action proposals) using natural language.
Benefit: This eliminates the need for technical intermediaries, enabling subject
matter experts to directly imbue the AI with their domain knowledge,
ensuring the agent's behaviour and output perfectly align with business
needs.
- AI-Generated KPI Suggestions for Accelerated
Configuration:
- Our platform now intelligently assists in agent
setup by proactively generating a list of relevant KPI suggestions. After
analysing your connected data sources' metadata and the agent's knowledge
base, the AI identifies key metrics and potential insights.
Benefit: This accelerates the agent creation process, helps uncover
valuable, often overlooked, metrics, and ensures your Virtual Data Analyst
is focused on the most impactful performance indicators from day one.
- Enhanced Autonomous Decision-Making & Proactive
Insights:
- Leveraging the latest advancements in Agentic AI,
our Data Agents are now even more capable of autonomous perception,
reasoning, action, and continuous learning. They can proactively monitor
data, identify emerging patterns, and flag anomalies without explicit
prompts.
Benefit: Shift from reactive analysis to a proactive intelligence model,
allowing your teams to anticipate trends and respond swiftly to market
changes.
- Streamlined Role-Based Publishing & Access Control:
- The process of deploying your custom Data Agents
to specific users or roles within your organization has been refined for
greater simplicity and security.
Benefit: Ensures that the right insights reach the right people with
appropriate data governance, empowering targeted self-service analytics
across departments.
- Intelligent Document Analysis:
- Our Document Agent specializes in processing and
analyzing unstructured documents (PDFs, text files, research papers,
contracts, reports) to provide precise answers to natural language
questions.
- Multi-Vector Database Support:
- Leverages enterprise-grade vector databases
(ClickHouse, Qdrant) for fast semantic search and context retrieval across
large document collections.
- Hybrid Search Capabilities:
- Combines dense vector search with sparse (BM25)
search for optimal document relevance, ensuring both semantic similarity and
keyword matching (a minimal score-fusion sketch appears at the end of this
module's notes).
- Metrics & Tracing:
- Comprehensive monitoring of LLM usage for cost optimization and performance
analysis.
- Advanced Action Identification and Proposal:
- Beyond just providing analytical insights, Data
Agents can now identify potential next steps or trigger events based on
predefined scenarios and their data analysis.
Benefit: Transforms the agent from a pure analytical tool into a strategic
partner that can suggest actionable recommendations, closing the loop
between insight and execution.
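As a companion to the Hybrid Search Capabilities item above, the following minimal
Python sketch illustrates the underlying idea of fusing dense (semantic) similarity
scores with sparse BM25 keyword scores. The min-max normalization and weighting scheme
are illustrative assumptions; they are not a description of the Document Agent's actual
fusion method.

    # Hypothetical sketch of hybrid search score fusion: blend a dense (semantic)
    # similarity score with a sparse BM25 keyword score per document.
    # Normalization and weighting are illustrative assumptions.

    from typing import Dict


    def hybrid_scores(dense: Dict[str, float], bm25: Dict[str, float], alpha: float = 0.5) -> Dict[str, float]:
        """Blend min-max normalized dense and BM25 scores; alpha weights the dense side."""
        def normalize(scores: Dict[str, float]) -> Dict[str, float]:
            if not scores:
                return {}
            lo, hi = min(scores.values()), max(scores.values())
            span = (hi - lo) or 1.0
            return {doc: (s - lo) / span for doc, s in scores.items()}

        d, b = normalize(dense), normalize(bm25)
        docs = set(d) | set(b)
        return {doc: alpha * d.get(doc, 0.0) + (1 - alpha) * b.get(doc, 0.0) for doc in docs}


    if __name__ == "__main__":
        dense = {"contract_A": 0.82, "report_B": 0.40}
        bm25 = {"contract_A": 3.1, "memo_C": 5.6}
        print(sorted(hybrid_scores(dense, bm25).items(), key=lambda kv: -kv[1]))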
Module 7: Reports (Self-Service Reports)
- New Features:
- Story Module UI/UX: Introduced a
redesigned user interface and user experience for the Story Module,
providing a more intuitive and engaging environment for creating and
presenting data narratives.
- Data Loss Protection in Reports:
Implemented enhanced data loss protection mechanisms within the
reporting module, safeguarding critical report data against accidental loss
or corruption.
- Enhancements & Improvements:
- Custom SQL or MQL Formula Support:
Enabled the use of custom SQL or MQL (MongoDB Query Language) formulas
directly within reports, providing advanced users with greater flexibility
for complex data manipulation and analysis.
- Bug Fixes:
- Formula Save Operation Issue (Modular
Operator): Fixed an issue in the new UI where the "Save formula"
operation was not functioning correctly for Modular operators in Pinot
database configured spaces.
- Export Tooltip Display: Resolved an issue
where the export tooltip in the new UI was appearing with a black
background, improving visual consistency.
- Access Denied on Formula Save (Non-Admin
User): Fixed an access denied error encountered by non-admin users
when attempting to save formulas in the new UI's Report module.
- Visual Clarity of Checkboxes and Lines:
Corrected an issue in the new UI where checkboxes and lines in certain
report cases were not sufficiently dark, improving visual clarity.
- Component Not Removed from Story:
Resolved a bug where a component was not being removed from a Story when its
associated data store was deleted from the backend.
- Filter Dropdown Alignment Issue:
Fixed an alignment issue in the new UI's Filter dropdown for Measure and
Date fields.
- BS Chart Precision Update:
Updated the default precision for BS Charts from 2 to 0 in the properties,
ensuring more appropriate data representation.
- Reduced Chart Size and Scrollbar in Chart
List: Optimized the Chart List in the new Report UI by reducing
chart size and removing unnecessary scrollbars for a cleaner view.
- Report Chart Colors Reflect New UI Theme:
Ensured that Report Chart colors in the Chart List and Design View of the new
Report UI now accurately reflect the new purple UI theme for visual consistency.
- Missing Supported Chart List in ML View:
Fixed an issue in the new UI's Report ML view where the list of supported
charts was not appearing.
- Drawer Width Inconsistency:
Corrected the width inconsistency in the report module drawers (e.g., Change
theme & Live refresh), ensuring all drawers have a consistent width.
- Disabled Apply Button in Filter Panel:
Addressed a bug where the "Apply" button in the filter panel was not
disabled until a filter was applied when the panel was attached above.
- ML View Creation Issue (ClickHouse):
Resolved an issue on the Dev Server where ML views could not be created in
ClickHouse configured spaces.
- Calculated Measure Fields Sum Issue (Pinot):
Fixed a bug in the BI Story on the DEV server (Pinot configured space)
where calculated measure fields using functions like Percentage and Division
did not return the correct sum value in the Story Validation Board.
- Conditional Color Dropdown Icon Size:
Corrected the size of the dropdown icon in the Conditional Color feature,
which was previously too large.
- Header Configuration Alignment:
Addressed an alignment issue in the Header Configuration section where
properties were not aligned properly after being enabled.
- Missing Export Icon Tooltip:
Fixed the missing tooltip for the Export icon, enhancing user guidance.
- SQRT Function Issue in Arithmetic Operation:
Resolved an issue with the SQRT function in arithmetic operations (-) on the
Dev server (Postgres DB configured space).
- Multiple Store Report UI Layout: Fixed an
issue where the UI layout was not rendering properly when multiple data
stores were included in a single report.
- Formula Field Count Discrepancy: Corrected
a bug in the new Report UI where the Formula Field Count showed 0 despite
existing columns.
- Chart List Display Issue: Addressed an
issue in the new Report UI where the chart list was extending below the
screen.
- Search Bar Button Color: Updated the
"Remove" and "Go" buttons in the search bar of the new Report UI to reflect
the new UI color scheme.
- Column List Auto-Expansion: Fixed a bug in
the new Report UI where the column list was not auto-expanding when
searching in Dimension/Measure fields.
- Dimension Column Icon Alignment: Corrected
the alignment of the 'ABC' icon for dimension columns on the Design Page in
the new Report UI.
- Unable to Save Last Tab: Resolved an issue
in the new Report UI where users were unable to save the last tab.
- Long Tab Names Visibility: Improved the
display of long tab names in the new Report UI to ensure they are fully
visible.
- Reset Icon Visibility: Fixed an issue
where the reset icon was not properly visible when columns were attached to
the above panel from the Global filter drawer in the new Report UI.
- Incorrect Date Range in Drill Into (BI Story):
Addressed a bug in BI Story's Drill Into feature on Date Columns, where
an incorrect date range was applied for month-based selections.
- Range Functionality Issue (STG server):
Resolved an issue where the Range functionality was not working on the STG
server across Postgres, ClickHouse, and Pinot databases.
- Custom Formula UI Dropdown: Fixed a UI
issue in Custom Formula where the dropdown was not disappearing when
manually updating double quotes in the formula.
- NLP Search Dimension Value Filter Issue (BI
Story): Corrected a bug in BI Story's NLP Search where, after saving
a view with a dimension value filter, the selected dimension value did not
display in the View Filter in the Storyboard View.
- Top/Bottom NLP Query Issue (Pinot & Mongo):
Addressed an issue in Pinot and Mongo Datastore spaces where NLP queries
containing "top/bottom" were not working as expected, showing complete
results instead of filtered ones.
- IF-ELSE Formula Result Discrepancy (Mongo):
Fixed a bug in Mongo Formula where, in IF-ELSE statements, if the return
statement contained a measure column and the else case was "1," the result
displayed "0" and "1" instead of the measure column.
- IF-ELSE Formula Save Issue
(ClickHouse/Postgres/Pinot): Resolved an issue in ClickHouse,
Postgres, and Pinot Datastore settings where an IF-ELSE formula could not be
saved if it was of Dimension type with a return statement based on a
Dimension, and the condition was related to a Measure.
- Date-Related NLP Query Issue (Pinot):
Fixed an issue where date-related NLP queries were not working in the Pinot
datastore.
- Measure Series Properties Display:
Ensured that only Quantile and Collective Aggregations are displayed in
Measure Series Properties for Benchmark Stick and Candle Stick Charts.
- Interaction Data Not Available Message:
Implemented a message "Data not available" when interacting between two
views with different stores in a single report if there is no valid data to
interact with, improving user feedback.
Module 8: Dashboard Designer
- Enhancements & Improvements:
- Enhanced Legend Support: Improved legend
functionality for charts, now supporting both categories and subcategories,
enabling clearer data interpretation and more detailed visualization
breakdowns.
- Refreshed UI/UX Design: A completely new
user interface and user experience design have been implemented, providing a
modern, intuitive, and efficient environment for dashboard creation and
management.
- Unified Header Menu Navigation & Alert
Integration: The header menu navigation has been updated to align
with the consistent design of the home module, including seamless
integration of alert icons for a cohesive platform experience.
- DataGrid Cell Merge Functionality (via Custom
Script): Added the capability to merge cells within DataGrid
components through custom scripting, offering greater flexibility in data
presentation and reporting.
- Data Formatter Support for Tabular Exports:
Implemented data formatter support for tabular exports to Excel/CSV and
PDF formats, ensuring consistent and accurate data representation in
exported reports.
- Responsive Dashboard Preview: Dashboards
now feature improved resize and auto-scaling capabilities in preview mode,
ensuring optimal viewing and layout across various screen sizes and devices.
- Bug Fixes:
- Designer:
- Fixed the PPT Export Service, which was not working.
- Fixed the Filter Saver Component, which was not working.
- Fixed an issue where WebSocket messages were sent twice.
- Charts (Sankey, Waterfall, Scorecard,
Knowledge Graph):
- Charts now display a "Data not available"
message when no data is present for the selected filter.
- DataGrid Component:
- Resolved duplicate rows issues after
sorting when empty rows are present.
- Improved case-sensitive sorting.
- Corrected conversion of comma-formatted values (e.g., "30,50") to numeric
format (e.g., 30.50).
- FilterChips Component:
- Fixed an issue where the Additional Filter Popup was sometimes not visible
when showAdditionalFilter = true.
- Scorecard:
- The Export header title is now visible.