Data enablement in a Big 4 Bank
- The innovation team of a Tier 1 Australian bank was created to compete with the newly founded ‘neo-banks’.
- They wanted to provide a modern, personalised user experience that could compete with the more agile, data-centric neo-banks.
- To achieve this they are utilising the wealth of data the core bank has captured to power data- and ML-driven products such as expense prediction, product recommendations and financial coaching.
- Eliiza Data Engineering was invited to be part of the initial data enablement team to help define, design and implement key data systems.
- The data enablement team is an integral part of this ‘new’ bank, tasked with ingesting data from on-prem and third-party data sources, building the processes that make this data useful, and distributing it to the ML and application development teams.
- As part of the foundation team we needed to create a pattern for data ingestion that could scale rapidly as the innovation team grew.
- The pattern needed to provide governance and security over the data, in line with the bank’s policies and APRA regulations.
- The ingestion system needed to work with varied sources (streaming, batch files and databases) and formats (JSON, CSV, fixed-width) of data, and make the data usable to downstream consumers.
- The Eliiza team created a data broker framework.
- The framework allows for quick and easy creation, configuration and deployment of data ingestion pipelines using JSON.
- Data format conversion, validation, governance and PII processing are all automated within the framework.
- Time to production for data pipelines is dramatically reduced, so application and ML engineers can focus on building products and getting them to market.
- The data broker framework is being further integrated with the bank’s data governance systems.
- The core bank is looking to use this framework to replace the current on-prem data lake ingestion framework.
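To make the configuration-driven idea concrete, here is a minimal sketch of what a JSON-configured ingestion step with automated validation and PII masking might look like. All names here (the config schema, `pii_fields`, `validate`, `mask_pii`) are illustrative assumptions for this sketch, not the framework's actual API.

```python
import hashlib
import json

# Hypothetical pipeline configuration: the schema below is invented for
# illustration and is not the bank's or the framework's real format.
PIPELINE_CONFIG = json.loads("""
{
  "pipeline": "transactions-daily",
  "source": {"type": "batch_file", "format": "csv"},
  "output": {"format": "parquet"},
  "pii_fields": ["customer_name", "email"],
  "validation": {"required_fields": ["account_id", "amount"]}
}
""")

def validate(record, config):
    """Reject records missing any configured required field."""
    missing = [f for f in config["validation"]["required_fields"]
               if not record.get(f)]
    if missing:
        raise ValueError(f"missing required fields: {missing}")

def mask_pii(record, config):
    """Replace configured PII fields with a truncated SHA-256 digest."""
    masked = dict(record)
    for field in config["pii_fields"]:
        if field in masked:
            digest = hashlib.sha256(masked[field].encode()).hexdigest()
            masked[field] = digest[:16]  # stable token, raw value discarded
    return masked

def ingest(record, config):
    """One record's trip through the pipeline: validate, then mask PII."""
    validate(record, config)
    return mask_pii(record, config)

row = {"account_id": "A1", "amount": "42.50",
       "customer_name": "Jane Citizen", "email": "jane@example.com"}
clean = ingest(row, PIPELINE_CONFIG)
print(clean["customer_name"])  # a 16-character hex token, not the raw name
```

The point of keeping validation rules and PII fields in the JSON config rather than in code is that a new pipeline becomes a new configuration file to review and deploy, which is what makes the rapid-scaling and governance claims above achievable.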