5 Data Warehousing Best Practices to Make the Most of Your Data

Today, business leaders recognize how important it is to leverage data-driven decision making to stay relevant in their industry. A study from PwC validates this by revealing that data-driven organizations are 3x more likely to deliver better results than businesses that rely solely on intuition and experience to guide their strategies. 

However, wanting to use data to answer business questions and actually being able to do so consistently and affordably are two different things. Businesses typically initiate data warehousing projects to produce more reliable, accurate insights, but these initiatives are complex and can be quite challenging to take to the finish line.

Below, we present five data warehousing best practices to maximize the chances of success for your next project.

  • Choose wisely between ETL and ELT

There are two methodologies for moving and transforming data from different sources to a data warehouse: Extract-Transform-Load (ETL) and Extract-Load-Transform (ELT). Choosing between ETL and ELT is an important consideration for a data warehouse project.

In an ETL workflow, data is transformed on the source or staging servers before loading, with the expectation that no further changes will be needed in the data warehouse for reporting. In an ELT workflow, by contrast, data is transformed directly in the data warehouse server, which provides more flexibility to transform data on an as-needed basis for reporting and analytics.
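
To make the contrast concrete, here is a minimal, self-contained sketch of both workflows in Python, using an in-memory SQLite database as a stand-in for the data warehouse. The source data, helper function, and table names are illustrative assumptions, not a specific vendor's API.

```python
import sqlite3

def extract():
    """Pull raw rows from a source system (stubbed for illustration)."""
    return [(1, 1999, "a@example.com"), (2, 450, "b@example.com")]

def etl_pipeline(warehouse: sqlite3.Connection):
    # ETL: transform on the staging/ETL server *before* loading, so the
    # warehouse only ever receives report-ready data.
    transformed = [(order_id, cents / 100.0) for order_id, cents, _ in extract()]
    warehouse.execute("CREATE TABLE fact_orders (order_id INT, amount REAL)")
    warehouse.executemany("INSERT INTO fact_orders VALUES (?, ?)", transformed)

def elt_pipeline(warehouse: sqlite3.Connection):
    # ELT: load the raw data as-is, then transform inside the warehouse
    # (with SQL) whenever reporting needs it.
    warehouse.execute("CREATE TABLE raw_orders (order_id INT, amount_cents INT, email TEXT)")
    warehouse.executemany("INSERT INTO raw_orders VALUES (?, ?, ?)", extract())
    warehouse.execute("""
        CREATE TABLE fact_orders AS
        SELECT order_id, amount_cents / 100.0 AS amount FROM raw_orders
    """)

etl_pipeline(sqlite3.connect(":memory:"))
elt_pipeline(sqlite3.connect(":memory:"))
```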

ETL was long the de facto standard for data warehouses, until cloud-based database services came along. With greater processing power and elastic scaling, cloud-based services now make it practical to use an ELT workflow without incurring prohibitive costs.

The benefits of choosing ELT over ETL are:

  • You do not need to know the transformation logic beforehand while designing the data pipelines and flow structure.
  • Only the data you need for reporting has to be transformed, as opposed to transforming all data up front as in an ETL workflow.
  • ELT is a better approach for handling unstructured data since you do not know the variety and volume of such data beforehand. 
  • ELT is suited for cloud data warehouses, where you can easily scale storage and processing power based on your requirements with minimal overhead.

On the other hand, the benefits of using ETL over ELT are:

  • ETL is better suited for on-premises data warehouses, where you have limited processing power or do not want to spend top dollar on the high-performance data warehouse server that ELT requires.
  • If you have sensitive information (such as customer personal data), ETL can be used to redact or remove it before loading into the data warehouse, which helps satisfy regulatory requirements such as those under the GDPR or CCPA (see the sketch below).
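
As a rough illustration of that last point, the following Python sketch redacts or pseudonymizes sensitive fields during the transform step, before anything is loaded into the warehouse. The field names and the hashing choice are assumptions for illustration only, not a compliance recipe.

```python
import hashlib

SENSITIVE_FIELDS = {"email", "phone"}

def redact(row: dict) -> dict:
    """Drop or pseudonymize sensitive fields so raw PII is never loaded."""
    cleaned = {}
    for key, value in row.items():
        if key in SENSITIVE_FIELDS:
            # Keep a stable pseudonym so joins still work, but not the raw value.
            cleaned[key + "_hash"] = hashlib.sha256(str(value).encode()).hexdigest()
        else:
            cleaned[key] = value
    return cleaned

rows = [{"customer_id": 42, "email": "jane@example.com", "country": "DE"}]
print([redact(r) for r in rows])  # report-ready rows, safe to load into the warehouse
```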

That said, the right choice ultimately depends on how your data architecture is built and what your requirements are.

  • Architecture considerations

Designing and implementing the right architecture is critical for the success of a data warehousing project. Here are the best practices for choosing a data warehouse architecture:

  • For any modern data warehouse, use an architecture based on massively parallel processing (MPP). This ensures that your data warehouse can scale consistently as data volumes grow, unlike single-instance data warehouses.
  • If using ETL, decide on the data model as early as possible. Ideally, the data models should be defined during the design phase, before any ETL pipeline is configured (a minimal schema sketch follows this list).
  • Consider choosing a cloud-based data warehouse, especially if you are operating with a limited budget, wish to work with real-time data, or want to scale your data warehouse to massive volumes of data.
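
As a rough illustration of defining the data model early, here is a minimal star-schema sketch (one fact table plus two dimensions), created in an in-memory SQLite database purely for demonstration. The table and column names are hypothetical.

```python
import sqlite3

# One fact table surrounded by dimension tables: agreeing on this model during
# the design phase keeps the ETL pipelines that feed it stable later on.
DDL = """
CREATE TABLE dim_customer (customer_key INTEGER PRIMARY KEY, name TEXT, country TEXT);
CREATE TABLE dim_date     (date_key INTEGER PRIMARY KEY, full_date TEXT, month TEXT, year INT);
CREATE TABLE fact_sales (
    sale_id      INTEGER PRIMARY KEY,
    customer_key INTEGER REFERENCES dim_customer(customer_key),
    date_key     INTEGER REFERENCES dim_date(date_key),
    quantity     INTEGER,
    amount       REAL
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)
```
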
  • Make use of metadata-driven ETL

A modern business has to deal with a lot more than just a couple of transactional databases. There are in-house solutions, SaaS/cloud-based services, and many other sources that generate massive amounts of data. Therefore, it is important to create robust data pipelines that can support all of these source systems seamlessly.

Typically, developers working on a data warehouse project prefer to hand-code data pipelines for ETL processes for each source system. While this gives you complete control over how, when, and what data you move, it can lead to several issues whenever a source system changes (which may not always be in your control, e.g., if you are using a third-party service).

The modern data warehouse approach is to use metadata-driven ETL to automate many of these processes. In the metadata-driven approach, information about the tables on the source system(s) and the target system is maintained in a central repository. This metadata typically includes table schemas, relationships between tables, data types for individual fields, and other information about the databases. In addition, any data transformations or data mappings that need to be performed on the source data are also stored in the central repository.
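
Here is a minimal sketch of what a metadata-driven pipeline might look like in Python. The "central repository" is represented as a plain dictionary for brevity; in practice it would live in a database or data catalog, and the source names and mappings shown are assumptions for illustration.

```python
# Central repository: which source tables exist, where they land, and how
# source columns map to warehouse columns.
METADATA = {
    "crm.customers": {
        "target_table": "dim_customer",
        "columns": {"id": "customer_key", "full_name": "name", "country_code": "country"},
    },
    "shop.orders": {
        "target_table": "fact_orders",
        "columns": {"id": "order_key", "total_cents": "amount_cents"},
    },
}

def run_pipeline(source_name: str, rows: list[dict]) -> tuple[str, list[dict]]:
    """Apply the mappings stored in the repository instead of hard-coding them."""
    meta = METADATA[source_name]
    mapped = [{meta["columns"][k]: v for k, v in row.items() if k in meta["columns"]}
              for row in rows]
    return meta["target_table"], mapped

# A schema change on a source only requires editing METADATA, not the code.
print(run_pipeline("crm.customers", [{"id": 1, "full_name": "Jane", "country_code": "DE"}]))
```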

The benefits of a metadata-driven ETL approach include:

  • The ability to design flows and pipelines at the logical level rather than working with hard-coded pipelines. This adds considerable flexibility; for example, a change in a source system can be handled simply by updating the central repository.
  • The ability to execute data pipelines on demand, so you can refresh your data warehouse whenever timely reporting requires it. With hard-coded pipelines, you are limited to schedule-based refreshes or need technical know-how to run your ETL pipeline on demand.

  • Keep data latency in mind

Let’s suppose you are looking to create a sales campaign for customers interested in purchasing a product from your portfolio. The more current your data, the better you can gauge how many sales the campaign can generate. For example, if your customer-interest data is outdated, some customers may have already purchased the product in question, leading to inaccurate sales projections. Similarly, many business processes, such as supply chain management and fraud detection, rely on up-to-date information.

Therefore, the traditional approach of loading batches of data at the end of the day may not be suitable for you. Fortunately, there are several approaches you can adopt to make sure you are getting updated information in real time.

One approach is data streaming, which involves updating the data warehouse as soon as new data becomes available at a source. The drawbacks of this approach are significant resource overhead and the possibility of duplicate or erroneous data showing up in the data warehouse.

An alternative to data streaming is micro-batching, which offers low data latency with better data quality. This approach involves loading small batches of data into the data warehouse at short, frequent intervals to achieve near-real-time reporting. The exact interval depends on your requirements: it can be anything from minute-by-minute for a fraud detection system to hourly for customer-interest data.
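
Below is a minimal sketch of a micro-batching loop, assuming the source exposes a high-water-mark column (such as an updated_at timestamp). The interval, helper functions, and watermark handling are illustrative stubs rather than a production design.

```python
import time
from datetime import datetime, timezone

BATCH_INTERVAL_SECONDS = 60  # minute-level latency; tune per use case

def fetch_new_rows(since: datetime) -> list[dict]:
    """Stub: query the source for rows changed after the given timestamp."""
    return []

def load_into_warehouse(rows: list[dict]) -> None:
    """Stub: append the micro-batch to a warehouse staging table."""
    pass

def run_micro_batches(iterations: int = 3) -> None:
    high_water_mark = datetime.now(timezone.utc)
    for _ in range(iterations):
        time.sleep(BATCH_INTERVAL_SECONDS)
        batch_start = datetime.now(timezone.utc)
        rows = fetch_new_rows(since=high_water_mark)
        if rows:
            load_into_warehouse(rows)
        # Advance the watermark so the next batch picks up where this one ended.
        high_water_mark = batch_start

run_micro_batches()
```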

Having said this, keep in mind that these approaches do not need to be applied to all the data in your data warehouse. A smarter way of achieving real-time reporting is to use them only for time-sensitive information that you genuinely need in real time; historical or non-urgent data can still be processed through regular batch loads.

  • Improve efficiency with data pipeline orchestration

A Gartner study shows that 87% of organizations have low business intelligence maturity. One key reason is poor data accessibility: most organizations are not able to extract insights from their data at the right time.

This is where data orchestration can help organizations build self-running, automated data pipelines that can be used to analyze data in near real time. With the help of data orchestration, organizations can automate the process of combining, cleaning, and transforming their business data, giving them the flexibility to keep up with ever-increasing data volumes.
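
As a rough sketch of the idea, the snippet below models a pipeline as a small dependency graph of steps and lets Python's standard-library topological sorter work out the execution order. A dedicated orchestrator (for example Airflow, Dagster, or Prefect) would add scheduling, retries, and alerting on top of this; the step names and dependencies here are hypothetical.

```python
from graphlib import TopologicalSorter

def extract_sources():
    print("extract: pull data from CRM, shop, and support systems")

def clean_data():
    print("clean: deduplicate and validate the extracted data")

def transform_data():
    print("transform: build the reporting tables")

def refresh_dashboards():
    print("refresh: notify the BI layer that fresh data is ready")

# Each step lists the steps it depends on; the sorter works out a valid order.
PIPELINE = {
    clean_data: {extract_sources},
    transform_data: {clean_data},
    refresh_dashboards: {transform_data},
}

for step in TopologicalSorter(PIPELINE).static_order():
    step()  # a real orchestrator would also handle retries, alerting, and scheduling
```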

Data orchestration is particularly important for insight-driven businesses because it enables them to make the most out of business intelligence without having to rewrite or modify code for existing data pipelines.

Conclusion

Managing a data warehousing project can be very challenging, especially when you are working with massive volumes of data. However, by following the best practices that we have shared in this blog post, you can increase your chances of succeeding in your next data warehousing initiative.

Sources: https://www.gartner.com/en/newsroom/press-releases/2018-12-06-gartner-data-shows-87-percent-of-organizations-have-low-bi-and-analytics-maturity