Numbers don’t lie

Numbers are fascinating, but they don’t make sense without the right context and interpretation. This post covers the basics of data storage, which in turn helps businesses analyze and visualize their data to get actionable insights.

Big Data and data analytics are no longer discussion and implementation areas only for large businesses. Small and medium enterprises can leverage them too, thanks to cloud computing, which has made data storage and processing easier and cheaper. That accessibility only adds to the importance of data in decision making.

Data can answer many questions that could help a business grow. For example: how are customers using a product? Which customer segment generates the most revenue? To fully leverage data, and to understand how it can be analyzed and visualized, it helps to know the underlying concepts of how data is stored.

Data can come from multiple sources: the day-to-day transactions of a business, usually referred to as transactional data; details of customers, suppliers, employees, and the like, referred to as master data; customer feedback; social media traffic; and so on. To make good use of all of this data, it needs to be stored in a central repository from which the required data can be queried or fetched. A series of actions is involved in transferring data from each individual source to that central repository.

Data Ingestion — the process of transferring raw data from various sources (spreadsheets, databases, web and SaaS applications such as CRM and SAP systems, and data from the internet) to a central repository, whether a data warehouse or a data lake. It happens in multiple ways, but the most common methods are batch processing and real-time streaming.

In batch processing, data is transferred from source to destination in groups, on a periodic schedule. To simplify, assume I have transactional data on product sales, captured and stored at a destination for future analysis. Each day, I schedule a job that groups all the records updated that day and transfers them to the central repository.
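As a rough sketch of that daily job, the snippet below copies the day’s updated rows from a source database to a warehouse database. The sales table layout and the use of sqlite3 on both ends are illustrative assumptions, not a prescription:

```python
import sqlite3
from datetime import date

# Minimal sketch of a daily batch job, assuming a source table
# sales(id, product, amount, updated_on) and a destination table of
# the same shape in a separate "warehouse" database. The table layout
# and the sqlite3 databases are illustrative stand-ins.

def run_daily_batch(source_db: str, warehouse_db: str) -> int:
    src = sqlite3.connect(source_db)
    dst = sqlite3.connect(warehouse_db)
    today = date.today().isoformat()

    # Group all the records that were updated today...
    rows = src.execute(
        "SELECT id, product, amount, updated_on FROM sales WHERE updated_on = ?",
        (today,),
    ).fetchall()

    # ...and transfer them to the central repository in one go.
    dst.executemany("INSERT OR REPLACE INTO sales VALUES (?, ?, ?, ?)", rows)
    dst.commit()
    src.close()
    dst.close()
    return len(rows)
```

In practice this function would be triggered by a scheduler (cron, Airflow, and so on) once a day.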

In contrast, with real-time streaming, new data is transported to the destination as soon as it enters the source system. This approach is more expensive, as more hardware and networking resources are required to move data continuously.
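Here is a toy version of the same pattern, assuming an in-memory queue as a stand-in for a real message broker (such as Kafka or Kinesis) and an append-only file as the destination:

```python
import json
import queue
import threading
from pathlib import Path

# Each event is moved to the destination the moment it arrives,
# rather than waiting for a nightly batch. The queue stands in for a
# broker; the JSON-lines file stands in for the central repository.

events: queue.Queue = queue.Queue()
destination = Path("warehouse_events.jsonl")

def consume_forever() -> None:
    while True:
        event = events.get()      # blocks until new data arrives
        if event is None:         # sentinel value: shut down cleanly
            break
        with destination.open("a") as f:
            f.write(json.dumps(event) + "\n")  # land it immediately

consumer = threading.Thread(target=consume_forever)
consumer.start()

events.put({"order_id": 1, "product": "widget", "amount": 9.99})
events.put(None)
consumer.join()
```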

Data Pipelines and ETL — a data pipeline connects multiple data sources to a destination through software tools and a series of processing steps, integrating those sources into one repository. The integrated data, in turn, gives businesses better insights to act on.

ETL (Extract, Transform, Load) is a specific kind of data pipeline: structured or transactional data is first extracted from the source systems, then transformed into the schema (structure) of the destination system (the data warehouse), and finally loaded into the warehouse for storage and future analysis.
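A compact ETL sketch follows. The CSV export, its column names, and the sqlite3 "warehouse" are hypothetical placeholders for whatever sources and warehouse a business actually uses:

```python
import csv
import sqlite3

def extract(path: str) -> list[dict]:
    # Extract: read raw rows from a source system's CSV export.
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows: list[dict]) -> list[tuple]:
    # Transform: reshape source fields into the destination schema,
    # normalizing the product name and casting the amount to a number.
    return [
        (row["order_id"], row["product"].strip().lower(), float(row["amount"]))
        for row in rows
    ]

def load(rows: list[tuple], warehouse_db: str) -> None:
    # Load: write the transformed rows into the warehouse table.
    con = sqlite3.connect(warehouse_db)
    con.execute(
        "CREATE TABLE IF NOT EXISTS orders (order_id TEXT, product TEXT, amount REAL)"
    )
    con.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    con.commit()
    con.close()

load(transform(extract("orders_export.csv")), "warehouse.db")
```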

Data warehouse & Data lake — They’re both central repositories for storing large amounts of data from multiple sources. However, a key difference lies in the type of data that is collected by each of these.

A data warehouse typically stores structured data, be it current and historical transactional data or operational data. A warehouse has a predefined schema (structure), and modern cloud-based warehouses also offer the option of loading the extracted data first and then transforming it according to business needs. The process is similar to ETL, but with the transformation and loading steps swapped (often called ELT). Warehouses make integrated data accessible and visible, so businesses can analyze it and make better decisions.
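A minimal sketch of that load-then-transform sequence, with an in-memory sqlite3 database standing in for a cloud warehouse: raw rows land as-is, and the cleaning and aggregation run inside the warehouse as SQL.

```python
import sqlite3

con = sqlite3.connect(":memory:")  # stand-in for a cloud warehouse
con.execute("CREATE TABLE raw_sales (product TEXT, amount TEXT)")

# Load step: raw, untransformed data lands exactly as extracted.
con.executemany(
    "INSERT INTO raw_sales VALUES (?, ?)",
    [(" Widget ", "9.99"), ("widget", "12.50"), (" Gadget ", "5.00")],
)

# Transform step: cleaning and aggregation happen in the warehouse itself.
con.execute(
    """
    CREATE TABLE sales_by_product AS
    SELECT lower(trim(product)) AS product,
           sum(CAST(amount AS REAL)) AS revenue
    FROM raw_sales
    GROUP BY lower(trim(product))
    """
)
print(con.execute("SELECT * FROM sales_by_product").fetchall())
```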

A data lake, on the other hand, can store various data types (structured, semi-structured, and unstructured) all in one repository, giving businesses the flexibility to collect as much data as possible without having to decide upfront how it will be stored and used. However, to analyze all of this data and benefit from predictive and prescriptive insights, businesses need to either invest in an advanced analytics tool or maintain an in-house data science team, which is expensive for some small businesses. The cost benefit of a data lake over a data warehouse is that typical cloud-based data lake offerings are less pricey than data warehouse offerings.
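The core idea is sketched below, with a local folder standing in for cloud object storage (such as Amazon S3 or Azure Data Lake): data of any shape is filed by source and arrival date, and all schema decisions are deferred to analysis time.

```python
import json
from datetime import date
from pathlib import Path

LAKE = Path("data_lake")  # local stand-in for cloud object storage

def land(source: str, payload: bytes, extension: str) -> Path:
    # File raw data by source and arrival date; enforce no schema.
    folder = LAKE / source / date.today().isoformat()
    folder.mkdir(parents=True, exist_ok=True)
    target = folder / f"part-{len(list(folder.iterdir()))}.{extension}"
    target.write_bytes(payload)  # stored exactly as received
    return target

# Structured, semi-structured, and unstructured data side by side.
land("pos", b"order_id,amount\n42,9.99\n", "csv")
land("crm", json.dumps({"customer": "acme", "tier": "gold"}).encode(), "json")
land("feedback", "Loved the new dashboard!".encode(), "txt")
```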

Source: EY Analytics Value Chain

Business Intelligence — for a typical small or medium business, access to data helps it understand how it has performed in the past and lets executives evaluate themselves vis-à-vis the industry they operate in. This is where business intelligence tools, such as reporting and dashboarding tools, come in: they mine the integrated data for insights that help businesses take the necessary actions. A Sisense analytics report states that small businesses, between 51 and 200 employees, have found new use cases for their data that help them navigate the COVID-19 crisis. Such is the power of business intelligence.

Dashboard from Datasahi

At 91social, we’ve helped many businesses build data warehouses and customized dashboards that enable data-backed decisions. If you’d like to know more, contact us at social@91social.com or on +91 95130 07587.
