
Data Crunching

Data crunching is an information-science method that prepares large amounts of data and information (Big Data) for automated processing. Data crunching concerns the preparation and modeling of a system or application that is to be used: the data is processed, ordered, and structured so that algorithms and program sequences can be executed on it. Accordingly, crunched data refers to data that has already been imported and processed in a system. Similar terms include "data munging" and "data wrangling"; these refer more to manual or semi-automatic data processing, which distinguishes them significantly from data crunching.

General information on the subject

The ultimate goal of data crunching is a deeper understanding of the matter conveyed by the data, as in the field of business intelligence, so that informed decisions can be made. Other areas where data crunching is used include medicine, physics, chemistry, biology, finance, criminology, and web analytics. Depending on the context, different programming languages and tools are used: whereas Excel, batch, and shell programming were common in the past, languages such as Java, Python, or Ruby are now preferred.


Data crunching, however, does not refer to exploratory analysis or data visualization, which are carried out with special programs adapted to the respective area of application. Data crunching is about correct preparation, so that a system can work with the records and the data format. It is thus a procedure that precedes data analysis. Like the analysis itself, it can be iterative: if the result of a crunching pass contains new data or errors, the program sequences are repeated until the desired result is achieved, namely an accurate and correct collection of data that can be processed or imported directly and contains no errors or gaps.

Practical importance

Most data crunching tasks can be reduced to three steps: first, the raw data is read in; next, it is converted into a chosen format; finally, the data is output in the correct format so that it can be processed or analyzed further.[1] This three-part division has the advantage that the individual steps (input, conversion, output) can also be reused in other scenarios.
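The three steps above can be sketched in Python; the file formats (CSV in, JSON out) and the normalization applied in the middle step are illustrative assumptions, not a prescribed implementation:

```python
import csv
import json

def crunch(csv_path, json_path):
    """Read raw CSV data, convert it, and output it as JSON."""
    # Step 1: read the raw data in.
    with open(csv_path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))

    # Step 2: convert into the chosen format
    # (here: normalize keys to lowercase and strip stray whitespace).
    records = [
        {key.strip().lower(): value.strip() for key, value in row.items()}
        for row in rows
    ]

    # Step 3: output the data in the correct format for further processing.
    with open(json_path, "w", encoding="utf-8") as f:
        json.dump(records, f, indent=2)
    return records
```

Because each step is self-contained, the same read or output routine can be reused for other data sources or target formats.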

Some applications of data crunching are:

  • Post-processing of legacy data within program code.
  • Conversion from one format to another, for example from plain text to XML records.
  • Correction of errors in data sets, whether spelling errors or program errors.
  • Extraction of raw data in preparation for subsequent evaluation.
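The conversion case from the list, plain text to XML records, might look like this minimal sketch using Python's standard library; the `key;value` line layout of the raw text is an assumed input convention:

```python
import xml.etree.ElementTree as ET

def text_to_xml(lines):
    """Convert simple 'key;value' text lines into an XML record set."""
    root = ET.Element("records")
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines in the raw input
        key, _, value = line.partition(";")
        record = ET.SubElement(root, "record")
        ET.SubElement(record, "key").text = key.strip()
        ET.SubElement(record, "value").text = value.strip()
    return ET.tostring(root, encoding="unicode")
```

A real conversion job would add error handling for malformed lines, which is exactly the kind of correction step the list above mentions.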

As a general rule, data crunching saves a great deal of time, since the processes do not have to be performed manually. Especially with large data sets and relational databases, data crunching can therefore be a significant advantage. However, an adequate infrastructure is necessary to provide the computing power for such operations. A system such as Hadoop, for example, distributes the load across various resources and performs its computations in computer clusters; it uses the principle of division of labor.
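The division-of-labor principle can be illustrated on a single machine with Python's standard library. This is only a sketch of the idea, not of Hadoop itself, which distributes such split-and-combine steps across entire clusters; the worker function is a stand-in for whatever crunching a real job performs:

```python
from concurrent.futures import ThreadPoolExecutor

def crunch_chunk(chunk):
    """Worker: crunch one slice of the raw data (here: parse strings and sum)."""
    return sum(int(x) for x in chunk)

def crunch_parallel(raw_values, workers=4):
    """Divide the data set into chunks and crunch them concurrently."""
    size = max(1, len(raw_values) // workers)
    chunks = [raw_values[i:i + size] for i in range(0, len(raw_values), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partial = list(pool.map(crunch_chunk, chunks))  # each worker takes one chunk
    return sum(partial)  # combine the partial results
```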

Importance for Online Marketing

Problems in the areas of online marketing, web design, and web analytics can often be solved with data crunching. Large online shops in particular rely on such methods. If, for example, 10,000 records in a relational database are to be converted automatically into a different format so that the relevant products can be displayed in the front end, data crunching is the method of choice. Particularly in the case of Big Data, the processing of large amounts of data is of vital importance: the more data is processed, the more time data crunching can save.[2]
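The shop scenario above can be sketched with Python's built-in sqlite3 module; the table name, columns, and JSON target format are illustrative assumptions about what a shop front end might expect:

```python
import json
import sqlite3

def export_products(db_path):
    """Read product records from a relational database and convert them to JSON."""
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row  # rows behave like dictionaries
    try:
        rows = conn.execute("SELECT id, name, price FROM products").fetchall()
    finally:
        conn.close()
    # Convert every record into the format assumed for the front end.
    return json.dumps([dict(row) for row in rows])
```

Run once over all 10,000 records, a routine like this replaces what would otherwise be a manual, record-by-record conversion.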
