Data is the new fire



Surely you have heard “data is the new gold” quite a few times in recent years, so why come up with another comparison? Because the “data is the new gold” analogy is flawed. It is not a bad one, but it is not a great one either.

Most of you will think of gold as the precious metal used in jewelry and luxury goods. That is hardly the image one wants to convey when talking about data; data is not a luxury and should not be worn as jewelry!

Some of you may think of gold as the rare element used in technology to create advanced systems that provide new insights. This is closer to the truth, as data is indeed the basis for very advanced solutions that provide new insights.

However, data is not a rare element. In 2020, 59 zettabytes (59 × 8 × 10²¹ bits) of data were created, captured, copied, and consumed worldwide[1]. That amounts to 59 billion 1 TB hard disks, hardly what one would call rare!
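To make the figure concrete, here is a quick back-of-the-envelope check of that conversion (plain arithmetic, SI units):

```python
# Sanity check: 59 zettabytes expressed as 1 TB hard disks (SI units).
ZETTABYTE = 10**21  # bytes in one zettabyte
TERABYTE = 10**12   # bytes in one 1 TB hard disk

total_bytes = 59 * ZETTABYTE
disks = total_bytes // TERABYTE
print(f"{disks:,} one-terabyte disks")  # 59,000,000,000, i.e. 59 billion
```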


Fire is one of the most important elements in human history. Without fire, mankind would never have transformed from a migratory to a sedentary existence. We would not have invented cooking, ironworking, or transportation, nor the Burning Man festival. So it is safe to say that fire has been quintessential to the current state and evolution of mankind. Likewise, data has become quintessential to today's society.

Data is everywhere you look. It is your social security number and your bank statements, the online stream you watched last night and the pizza you had delivered to your doorstep. It is also your child’s school results, the speeding ticket you received, and the medical checkup you had last week. It is hard to imagine a society in which you could do anything without data being involved. (A society based on trading goods, you say? And how are you going to tally your expenses or define a conversion rate between goods? You will need logic and data for that.)

In fact, one could argue that everything which is not 0 is data. And yes, Warren Buffett will certainly argue that a 0 is also very important data to him!


But isn’t fire the cause of so much destruction and loss of knowledge? The Great Fire of London and the fire in the Great Library of Alexandria[2] did not exactly contribute to the progress of society.

Indeed, uncontrolled fire is destructive in nature. So is uncontrolled data. Nothing can keep you further from the truth than a dataset which is incorrect or incomplete, which was cobbled together, or for which you lack the context and the insight to understand it.

This brings us to the heart of this paper: data in itself is pretty useless, even if it is abundant. It is only when we transform that data into information that we can leverage its abundance and the fact that our entire society is based on it.

Just like fire is used to smelt iron, bake bricks, and cook food, transforming raw materials into products with a much higher value than before, data must be used to create sets of information that provide the understanding and insights we need. As one learns in wilderness survival training: fire is your friend, and it is sad when your friend is gone. The same applies to data: you will really miss it when you do not have it!


A simple way to make this tangible is our Data Mount:

Each new camp on the Data Mount adds value to the initial data. Starting from a large set of unorganized data, we need to add context and meaning, to give the data its importance, effectively turning it into information.

The information then needs to be interpreted. How can we apply this information to our goals? How does it relate to other information? It is the information in a larger context which gives you the understanding you need to make decisions.

Finally, applying this understanding to your actions, making those decisions, and seeing the consequences, provides you with insight.

Due to the ever-changing nature of data, there is a constant flow of change between these four concepts. We will keep finding new paths to the top.
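To make the climb up the Data Mount concrete, here is a deliberately simple sketch in Python. The patient names, temperature readings, and fever threshold are all invented for illustration; the point is the transformation at each camp, not the domain.

```python
raw_data = [37.2, 39.1, 36.8]  # bare readings: data without context

# Camp 1 -> 2: add context and meaning, turning data into information.
information = [
    {"patient": name, "temp_c": temp}
    for name, temp in zip(["Ann", "Ben", "Cleo"], raw_data)
]

# Camp 2 -> 3: interpret the information against a goal
# (the fever threshold is our assumption).
FEVER_C = 38.0
understanding = [
    {**record, "fever": record["temp_c"] >= FEVER_C} for record in information
]

# Camp 3 -> 4: acting on this understanding and seeing the consequences
# yields insight, e.g. "one of three patients needs a follow-up call".
needs_followup = sum(r["fever"] for r in understanding)
print(f"{needs_followup} of {len(understanding)} patients need follow-up")
```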


Turning data into information is your first goal. This may look like a simple task or a very daunting one, depending on your degree of optimism. The trick is to keep the scope both manageable and useful.

Start with a dataset which you (at least partially) know and which you know can be used to create a tangible result, e.g. a report. Nothing is as frustrating as analyzing and structuring a dataset which does not amount to anything.


The combined finance and risk organization of a financial institution struggled with the availability, quality, completeness, and timeliness of the data they required. This resulted in late, incomplete, and erroneous reports, and in deficiencies in the risk models.

Mount Consulting was asked to help, which we did through a two-phase approach.

Phase 1: define your high-level data model
You need to have a target to work against, to validate your choices, and to ensure that you are still going in the right direction. In this case, this was a high-level data model for the combined finance and risk requirements. It consists of Data Objects (think tables) and their correlations. Every subsequent step is then a refinement of one or more of these Data Objects, while ensuring that the consistency designed in this first step is maintained.
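A high-level data model like the one from Phase 1 can be sketched in a few lines. The object names (Counterparty, Loan) and attributes below are hypothetical; the pattern is simply Data Objects plus the correlations between them.

```python
from dataclasses import dataclass, field

@dataclass
class DataObject:          # "think tables"
    name: str
    attributes: list[str] = field(default_factory=list)

@dataclass
class Correlation:         # a link between two Data Objects
    source: str
    target: str
    meaning: str

# Hypothetical high-level model for a finance/risk domain:
model = [
    DataObject("Counterparty", ["id", "name", "rating"]),
    DataObject("Loan", ["id", "counterparty_id", "principal", "maturity"]),
]
correlations = [
    Correlation("Loan", "Counterparty", "each loan belongs to one counterparty"),
]
```

Every later refinement then adds attributes to these objects without breaking the correlations defined here.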

Phase 2: bring in the details
Pick a set of information which you want to produce, e.g. a report. Do not start from a Data Object, as you will quickly realize that the data from one Data Object cannot be converted into useful information: it lacks context. Pick your context, break it down into the underlying data, and then start clearly defining the Data Attributes and assigning them to the Data Objects.
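As a minimal illustration of that breakdown (the report columns and object names below are invented for this sketch), each column of the chosen report gets a clearly defined Data Attribute and a home in exactly one Data Object:

```python
# Columns of the report we picked as our first piece of information:
report_columns = ["counterparty_name", "rating", "total_exposure", "as_of_date"]

# The breakdown: report column -> (Data Object, Data Attribute).
assignments = {
    "counterparty_name": ("Counterparty", "name"),
    "rating": ("Counterparty", "rating"),
    "total_exposure": ("Exposure", "amount"),
    "as_of_date": ("Exposure", "reporting_date"),
}

# Check that every column has been given a home before building the report.
unassigned = [col for col in report_columns if col not in assignments]
assert not unassigned, f"still undefined: {unassigned}"
```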

Once you can show and celebrate your first success, select the next information need and continue the exercise. The Data Objects will grow in content, and the exercise will get easier as more information becomes available. Make sure to safeguard the overall consistency as defined in Phase 1.


At Mount Consulting we have extensive experience with analyzing, modeling, integrating, cleansing, defining, reporting, and designing data and data solutions for finance and risk. Whether the data is required by regulators or internally, for accounting or for risk management, we know the processes and the data they require.


[2] It is believed that the Library actually declined over centuries, rather than being destroyed in one particular event.

Want to know more?

We would be happy to talk further about how we can help you!