In today’s fast-paced business environment, leaders need a broad, deep, and connected view of their enterprise to thrive and remain sustainable. Demand for high-quality, actionable data grows every day, yet many organizations struggle to extract value from their intellectual assets. This is particularly true where cross-functional collaboration is key to success. However, despite investments in data lake projects, many organizations continue to face two major challenges:
1. Highly fragmented information
Key reports with curated data are in Excel and PDF files and live in silos.
More granular information and analytics live in multiple departmental solutions that are not connected to the full picture.
Moreover, business units often build their own analytics solutions for each department, further increasing complexity through multiple tools and solutions while lacking the connected view that makes insights actionable.
2. Long cycle-time to build multi-domain analytics solutions
Business and functional experts see value in connecting information across domains of expertise and business units. However, they are often disappointed by implementation timelines of months or more, which may still leave little to show in ROI on data infrastructure investments or true value to the enterprise.
The implications are meaningful and create a competitive gap in the race for the information advantage. They include:
- Information complexity
- Barriers to cross-functional collaboration
- An incomplete fact base for data-driven decision-making
- Blind spots, missed opportunities, and increased risk
- Lack of speed and agility in adapting to evolving information needs
- Higher IT costs
Why does this problem occur and how can you solve it?
Fragmented analytics solutions typically result from business units and functions building analytics solutions independently, without standard components and tools to enable integration. This approach is costly and inefficient.
The key to closing this gap is two-pronged:
- Create a simple data governance structure that empowers the experts who understand and own the data to curate and publish data and analytics solution components, federating parts as needed.
- Provide teams with a set of standard data components, a common assembly method, and authoring tools.
The diagram below outlines an approach to assemble integrated analytics solutions.
The cloud significantly reduces the complexity and time to provision computing resources for the enterprise. Similarly, CHORAL reduces the complexity and time to build analytics solutions by empowering business and functional experts to be authors while building the required data model automatically.
The CHORAL platform and methodology
CHORAL is a new type of data platform designed to rapidly build and evolve integrated analytics solutions. Engineered by former IT executives, CHORAL combines domain expertise with a data & analytics platform to address these data challenges in a single application. It consists of the following components:
- A library of interoperable analytics solutions. Each solution includes configuration files and sample data sets that are easy to configure and set up. The current library addresses a broad range of management issues in banking and other industries, and it will continue to evolve, providing an extensive repository of pre-built components that are ready to use.
- A high-performance integrated data layer combining the following capabilities:
- A patented nodal network with standard data components to organize and interconnect data for action and use. The nodal network is a simple and effective data pattern for building a wide range of analytics solutions.
- A set of software services to build and manage nodal networks.
- A set of data adapters to connect to multiple data sources.
- In-memory analytics enabled by Apache Arrow for speed.
- An agile implementation method supported by simple authoring tools.
- Powerful visualization capabilities enabled by Vega-Lite.
Data modeling approach
Data is managed as a product and subdivided into domains.
Each domain is led by a team that curates and publishes information & analytics for re-use by other teams. The overall approach is to decentralize data ownership while keeping it connected across the enterprise using data governance and modeling methods.
A nodal network is assembled using standard data components:
- Dimension tables: describing common analytic dimensions
- Fact catalogs: describing lists of items (people, things, concepts)
- Fact journals: containing time-stamped data such as transactions and events
A set of software services is used to build and manage the nodal network.
Following a common model pattern across domains simplifies the data model, accelerates implementation and enables automation.
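The three component types above can be illustrated with a small sketch in Python using pandas. All table and column names here are hypothetical examples chosen for illustration; CHORAL's actual components and services are not shown.

```python
import pandas as pd

# Dimension table: a common analytic dimension shared across domains.
regions = pd.DataFrame({"region_id": ["R1", "R2"],
                        "region_name": ["North", "South"]})

# Fact catalog: a list of items (here, customer accounts).
accounts = pd.DataFrame({"account_id": ["A1", "A2", "A3"],
                         "region_id": ["R1", "R2", "R1"]})

# Fact journal: time-stamped data referencing the catalog.
transactions = pd.DataFrame({
    "account_id": ["A1", "A2", "A1", "A3"],
    "ts": pd.to_datetime(["2024-01-05", "2024-01-06",
                          "2024-02-01", "2024-02-03"]),
    "amount": [100.0, 50.0, 25.0, 200.0],
})

# Interconnecting the three components yields a cross-domain view:
# total transaction amount per region.
view = (transactions
        .merge(accounts, on="account_id")
        .merge(regions, on="region_id")
        .groupby("region_name", as_index=False)["amount"].sum())
print(view)
```

Because every domain follows the same dimension/catalog/journal pattern, joins like the one above can be generated mechanically, which is what makes the automation described here plausible.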
Blueprints enable rapid implementation – We figured it out so you don’t have to!
Developing functional specifications, data models and analytics solutions is time-consuming and costly. Solution blueprints streamline these phases of a project by providing answers to the following questions:
- What are the key issues and questions that need to be addressed?
- What information is needed to answer these questions?
- What analytic capabilities will help us derive insights?
- How do we build a data model to meet these requirements?
- What are examples of the data sets to be collected?
Though some of these concepts may seem complicated in theory, in practice they are simple to execute and implement. If you have any questions about integrated analytics, or would like to see a demo of how it works, reach out!