Friday, 30 May 2008

Neeraj Nathani SmartBridge Trading Solutions Pvt Ltd


Big Data Technology Evaluation Checklist

Anyone who’s been following the rapid-fire technology developments in the world becoming known as “big data” sees a new capability, product, or company launched virtually every week. The ambition of all of these players, established and newcomer alike, is tremendous, because the potential value to business is enormous. Each new arrival aims to address the pain enterprises are experiencing from unrelenting growth in the velocity, volume, and variety of the data their operations generate.
What’s being lost, however, in some of this frothy marketing activity, is that it’s still early for big data technologies. There are vexing problems slowing the growth and the practical implementation of big data technologies. For the technologies to succeed at scale, there are several fundamental capabilities they should contain, including stream processing, parallelization, indexing, data evaluation environments and visualization.
When evaluating big data technology, it can be valuable to ask companies about their ability to deliver some of these fundamental capabilities. If you get an unsophisticated answer, you may find that the company is not as serious or capable as you might have expected. (For my research on this topic please see: Designing a Scalable and Agile Big Data Platform.)
In this article, we examine some of the big data requirements that are partially defined or in early stages of maturity. Any big data vendor worth considering should be able to address these requirements now or in the near future, or confidently explain their position. We sat down with Mike Driscoll, CTO of Metamarkets, a big data company that delivers predictive analytics solutions for digital media, to develop a checklist for evaluating new solutions and their fit criteria against the challenges of big data:
Some general questions to begin the evaluation process:
  • Does the solution allow for stream processing, and incremental calculation of statistics?
  • Does the solution parallelize processing and take advantage of distributed computing?
  • Does the solution perform summary indexing to accelerate queries of huge datasets?
  • Does the solution provide data exploration and evaluation environments that make it quick to understand the value of new datasets?
  • How does a solution directly provide or easily integrate with visualization tools?
  • What is the strategy for verticalization of the technology?
  • What is the ecosystem strategy? How does the solution provider fill the gaps in its capabilities through partnerships?
Getting these questions answered will put most vendors to the test and help improve your understanding of the technology you are evaluating.
Stream processing
As the pace of business has increased, and the number of instrumented business processes has expanded, increasingly our attention is focused not on “data sets,” but on “data streams.”
“Decision-makers are interested in putting their finger on the pulse of their organization, but to get answers in real time, they require architectures that can process streams of data as they happen,” Driscoll says. “Current database technologies are not well suited to do this kind of stream processing.”
For example, calculating an average over a group of data can be done in a traditional batch process, but far more efficient algorithms exist for calculating a moving average of data as it arrives, incrementally, unit by unit. If you want to take a repository of data and perform almost any statistical analysis, that can be accomplished with open source products like R or commercial products like SAS. But if you want to compute streaming statistics, where chunks of data are incrementally added or removed as in a moving average, the libraries either don’t exist or are immature.
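To make the distinction concrete, here is a minimal sketch in Python (the article mentions R and SAS; Python and the class names here are purely illustrative assumptions) of a batch average versus statistics that update incrementally as each unit of data arrives:

```python
from collections import deque

def batch_mean(values):
    """Batch approach: needs the full dataset before it can answer."""
    return sum(values) / len(values)

class RunningMean:
    """Incremental (streaming) mean: one O(1) update per arriving unit."""
    def __init__(self):
        self.count = 0
        self.mean = 0.0

    def add(self, x):
        self.count += 1
        self.mean += (x - self.mean) / self.count  # incremental update
        return self.mean

class MovingAverage:
    """Sliding-window moving average over the most recent `window` units."""
    def __init__(self, window):
        self.window = deque(maxlen=window)
        self.total = 0.0

    def add(self, x):
        if len(self.window) == self.window.maxlen:
            self.total -= self.window[0]  # drop the oldest unit's contribution
        self.window.append(x)
        self.total += x
        return self.total / len(self.window)

if __name__ == "__main__":
    stream = [12, 15, 11, 14, 18, 20, 17]
    ma = MovingAverage(window=3)
    for x in stream:
        print(ma.add(x))  # an up-to-date answer after every arriving unit
```

The point is not the arithmetic, which is trivial, but the shape of the computation: the streaming version never needs to revisit the full repository of data.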
“The entire ecosystem around streaming data is underdeveloped,” says Driscoll.
In other words, if you’re talking to a vendor about a big data project, you have to determine whether this kind of stream processing is important to your project and, if it is, whether they have the capability to provide it. This axiom extends all the way down, not just to the analytical algorithms that run over the streams, but also to the way in which those streams are queued, ingested, managed and ultimately processed. Many architectures exist for ingesting and queuing data streams, some of which are proprietary; TIBCO, Esper and ZeroMQ all offer solutions. But those solutions are only about moving packets of data around. Actually analyzing the streams requires practitioners to build at a lower level, for which there is another, rapidly evolving toolset.
Parallelization
There are many definitions of big data. Here’s a useful one: “Small data” fits in memory on a single desktop or machine, roughly 1 GB to 10 GB. “Medium data” fits on a single hard drive, roughly 100 GB to 1 TB. “Large data” is distributed over many machines, comprising 1 TB to multiple petabytes.
“If you want to work with distributed data, and you expect to have any hope of processing that data in a reasonable amount of time, that requires distributed processing,” Driscoll says.
Parallel processing comes to the fore with distributed data. Hadoop is one of the better-known examples of distributed or parallelized processing, and it can do more than distributed processing: it can also conduct distributed queries, which have recently been a subject of interest for designers of massively parallel processing (MPP) databases. The goal is to take a query and parallelize it across a set of nodes; each node does partial work for the query in parallel, and the partial answers are then combined into a single unified answer, Driscoll explains. Parallelizing queries is not a simple affair: when data is analyzed in parallel streams, each new unit of data must be combined with the existing units in order to produce an answer.
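As a toy illustration of that partial-work-then-combine pattern, here is a minimal Python sketch (an assumption for illustration, not anything a particular vendor ships) in which worker processes stand in for the nodes of an MPP cluster:

```python
from multiprocessing import Pool

def partial_aggregate(partition):
    """Each 'node' computes a partial answer for its own shard of the data."""
    return (sum(partition), len(partition))

def combine(partials):
    """Combine the partial answers into a single unified answer (here, a mean)."""
    total = sum(s for s, _ in partials)
    count = sum(n for _, n in partials)
    return total / count

if __name__ == "__main__":
    # Four partitions standing in for data distributed across four nodes.
    partitions = [[1, 2, 3], [10, 20], [4, 5, 6, 7], [100]]
    with Pool(processes=4) as pool:
        partials = pool.map(partial_aggregate, partitions)
    print(combine(partials))  # same result as averaging the un-partitioned data
```

In a real MPP system the partitions live on separate machines and the optimizer decides how the query is split, but the essential contract is the same: partial work in parallel, then a cheap combine step.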
Therefore, if a vendor is trying to sell you a solution for addressing big data at scale, their salespeople should be able to articulate their special secret sauce and strategy for parallelization.
“One of the most important features that current data warehouse vendors must offer is the ability to do parallel copy from Hadoop into their warehouse,” Driscoll says. “So, whether it’s EMC Greenplum offering the ability to do distributed parallel copy from Hadoop to Greenplum, Netezza or Oracle, the ability to parallelize data transfer is a critical feature.”
Hadoop has a massively distributed file system and can support distributed queries on top of it. But it does not inherently support parallelization—running a parallel process on Hadoop without an algorithm for optimizing queries can significantly slow down the process, taking minutes to return an answer. This is acceptable for some queries, but it won’t support real-time analytics in a big data world. The power and speed of that algorithm will be a determining factor of the robustness and cost of the solution, and that should be appropriately scaled to your needs, says Driscoll.
Summary indexing
Summary indexing is the process of creating a pre-calculated summary of data to speed up running queries. The problem with summary indexing is that it requires you to plan in advance what kind of queries you are going to run, so it is limiting.
The most common form of summary indexing is the star schema used to support speedy searches in data warehouses. A star schema prioritizes one master dimension (such as location or product) in advance of running a multi-dimensional data cube, organizing subordinate dimensions in relation to the master. The technique works well, but has one huge problem. If you want to ask a new question, it takes a long time to reconfigure the schema and associated data cubes and recompute them. When we begin to ask questions of all our data, not just the structured data, it is practically impossible to create a star schema to answer every possible question. The problem is not pre-processing; it’s the difficulty of reconfiguring the pre-processing as needed to ask new questions and get a speedy response.
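As a concrete illustration of that limitation, the following sketch uses pandas (an assumed tool choice, not one the article prescribes) to pre-aggregate a summary along anticipated dimensions. A question the summary anticipated is cheap; a new question forces a fresh pass over the raw data or a rebuilt summary:

```python
import pandas as pd

# Raw transaction-level data (in practice far too large to scan per query).
sales = pd.DataFrame({
    "date":    ["2008-05-01", "2008-05-01", "2008-05-02", "2008-05-02"],
    "region":  ["East", "West", "East", "West"],
    "product": ["A", "B", "A", "A"],
    "revenue": [100.0, 250.0, 120.0, 90.0],
})

# Summary index: pre-aggregate along the dimensions we expect to query.
summary = sales.groupby(["date", "region"], as_index=False)["revenue"].sum()

# Fast path: the anticipated question is answered from the small summary table.
print(summary[summary["region"] == "East"])

# Slow path: a *new* question (revenue by product) is not in the summary,
# so it forces another pass over the raw data -- or a rebuilt summary.
print(sales.groupby("product", as_index=False)["revenue"].sum())
```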
The ideal solution would easily adjust the summaries being created as new questions arose. With a quickly created, summarized form of the data, it would then be possible to use data-analysis tools such as QlikView, Tableau, or TIBCO Spotfire for exploration and analysis. But there is currently a gap in available tools to make this summary creation easier, as many available tools don’t reach down to machine-level data, says Driscoll. The result is that the IT department becomes involved in building a custom query.

Some help is on the way for this problem. Vendors such as Splunk have emerged with a solution based on their search language that makes creating summary indexes far faster than other approaches, like star schemas. The designers of technology like SAP HANA, 1010 Data, and Metamarkets recommend an in-memory approach that completely abandons summarizing, by keeping vast amounts of data in in-memory systems.
But data volumes are growing fast and the need for summarizing will never go away completely. For the short and medium term, vendors must have a strategy for agile creation of summary indexes.
Data evaluation environments
How does your vendor’s solution understand, or allow you to understand, the meaning of new datasets and incorporate them into your analysis?
For example, if your business has a retail store, and you are studying transactions, you could pick up the movements of people around the web site, or even around brick-and-mortar stores through opt-in GPS signals and cell-phone tracking. Once you acquire the data:
  • How do you incorporate and understand what that data can tell you?
  • How do you develop a model of the store that will help you analyze customer behavior?
  • How do you understand when those movements become events?
  • How fast can you figure that out—before the customer leaves the store?
To answer these questions, your solution needs to be able to join disparate datasets. Very few vendors have a distinctive capability for joining datasets. And, just as the most critical areas of a building are its joints, the same is true of data architecture: where data interfaces, tremendous value can be unlocked, Driscoll says.
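Here is a hypothetical sketch of the kind of join Driscoll describes: stitching an online clickstream to in-store purchases on a shared customer key. The column names, and the assumption of a clean shared key, are illustrative only; in practice, resolving identities across datasets is the hard part.

```python
import pandas as pd

# Online impressions/clicks, keyed by a (hypothetical) customer id.
clicks = pd.DataFrame({
    "customer_id": [1, 2, 2, 3],
    "ad":          ["latte_promo", "latte_promo", "muffin_promo", "latte_promo"],
    "clicked_at":  pd.to_datetime(["2008-05-20 09:00", "2008-05-20 09:05",
                                   "2008-05-21 14:30", "2008-05-22 08:45"]),
})

# In-store purchases from a separate system.
purchases = pd.DataFrame({
    "customer_id":  [2, 3, 4],
    "item":         ["latte", "latte", "espresso"],
    "purchased_at": pd.to_datetime(["2008-05-20 10:15", "2008-05-23 08:50",
                                    "2008-05-21 07:30"]),
})

# Join the disparate datasets; keep only purchases that follow a click.
joined = clicks.merge(purchases, on="customer_id", how="inner")
converted = joined[joined["purchased_at"] > joined["clicked_at"]]
print(converted[["customer_id", "ad", "item"]])
```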
“The holy grail for anyone in the online retail space is to understand the connection between online impression events that lead to actions, such as clicks, or some level of engagement that eventually lead to purchasing behaviors, which eventually lead to long-term customer adoption,” Driscoll says. “Right now, all of these datasets live in different places. American Express knows where you’ve purchased your Starbucks coffee and Foursquare knows where you checked in at Starbucks, and Yahoo! knows when you clicked on an ad discount for a Starbucks latte on a hot summer day. And yet, people are struggling to thread these various data streams together.”
As enterprises begin to draw in disparate data feeds—particularly mobile data feeds—it’s critical that a vendor has a solution for rapidly joining disparate datasets, because the information they contain is critical for enterprises.
Visualization
There are two broad categories of visualization tools, according to Driscoll.
Exploratory visualization describes tools that allow a decision-maker and an analyst to explore different axes of the data for relationships, which usually involves some kind of visual “mining for insights.” Tools such as Tableau and TIBCO Spotfire, and to a lesser extent QlikView, fit into this category, Driscoll says.
Narrative visualizations are designed to examine a particular axis of the data in a particular way. For instance, if you want to look at, say, a time series visualization of sales broken out by geography for an enterprise, a format for that visualization can be pre-created. The data can be played back month by month for every geography, sorted into a pre-cast format. Vendors such as Perceptive Pixel fit into this category.
In narrative visualizations, “Certain knobs are free to explore the data, but it’s not completely open to ask any question,” Driscoll says. “These visualizations are designed to tell a certain story about the data, just as certain pre-computations or reports are designed to tell a certain story. And some tools are better for the first, ad-hoc exploratory model, and others are better for the second, constrained narrative.”
Mind the Verticals
There are nearly as many types of decision-making needs in different verticals as there are ways to collect, process and analyze data. Vendors should be aware that decision-makers within an organization, and across verticals, are accustomed to seeing different kinds of visualizations.

“Any vendor that wants to serve the needs of those decision-makers ought to be well aware of what those expected narratives are, because that will speed the adoption of that visualization tool,” Driscoll says, citing the preference for candlestick plots in the financial services industry.
Ecosystem Strategy
The largest, most successful companies all spend tens of millions creating ecosystems around their products. These ecosystems are supported by product features and business models that allow the product to do its job while also working with other technologies or partners that extend the product or tailor it to special uses. If a product doesn’t have an ecosystem strategy, you may find that it is hard to adapt to your needs and that expertise to help with implementation and configuration is hard to come by.
This list of requirements for big data technology is not exhaustive, but it is a good start. Using these topics when evaluating big data technology will only lead to deeper understanding.

Source: Wikipedia.


Friday, 23 May 2008

Data warehouse Neeraj-Nathani SmartBridge



Data warehouse

In computing, a data warehouse or enterprise data warehouse (DW, DWH, or EDW) is a database used for reporting and data analysis. It is a central repository of data which is created by integrating data from one or more disparate sources. Data warehouses store current as well as historical data and are used for creating trending reports for senior management reporting such as annual and quarterly comparisons.
The data stored in the warehouse are uploaded from the operational systems (such as marketing and sales). The data may pass through an operational data store for additional operations before they are used in the DW for reporting.
The typical ETL-based data warehouse uses staging, data integration, and access layers to house its key functions. The staging layer or staging database stores raw data extracted from each of the disparate source data systems. The integration layer integrates the disparate data sets by transforming the data from the staging layer often storing this transformed data in an operational data store (ODS) database. The integrated data are then moved to yet another database, often called the data warehouse database, where the data is arranged into hierarchical groups often called dimensions and into facts and aggregate facts. The combination of facts and dimensions is sometimes called a star schema. The access layer helps users retrieve data.[1]
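A minimal sketch of that staging → integration → access flow, with plain Python dictionaries standing in for the staging database, the ODS, and the warehouse tables (all table and field names are invented for illustration):

```python
# Staging layer: raw extracts from disparate source systems, kept as-is.
staging = {
    "crm_customers": [{"cust_id": "7", "name": "acme corp "}],
    "erp_orders":    [{"order_no": 101, "cust": "7", "amount": "1200.50"}],
}

# Integration layer (ODS): cleanse and conform the staged data.
ods_customers = [
    {"customer_id": int(r["cust_id"]), "name": r["name"].strip().title()}
    for r in staging["crm_customers"]
]
ods_orders = [
    {"order_id": r["order_no"], "customer_id": int(r["cust"]),
     "amount": float(r["amount"])}
    for r in staging["erp_orders"]
]

# Warehouse layer: load into a dimension and a fact table (a tiny star schema).
dim_customer = {c["customer_id"]: c for c in ods_customers}
fact_sales = [
    {"order_id": o["order_id"], "customer_key": o["customer_id"],
     "amount": o["amount"]}
    for o in ods_orders
]

# Access layer: a simple query over the loaded warehouse tables.
for f in fact_sales:
    print(dim_customer[f["customer_key"]]["name"], f["amount"])
```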
A data warehouse constructed from integrated source data systems does not require ETL, staging databases, or operational data store databases. The integrated source data systems may be considered part of a distributed operational data store layer. Data federation or data virtualization methods may be used to access the distributed integrated source data systems to consolidate and aggregate data directly into the data warehouse database tables. Unlike the ETL-based data warehouse, the integrated source data systems and the data warehouse are all integrated, since there is no transformation of dimensional or reference data. This integrated data warehouse architecture supports drill-down from the aggregate data of the data warehouse to the transactional data of the integrated source data systems.
Data warehouses can be subdivided into data marts. Data marts store subsets of data from a warehouse.
This definition of the data warehouse focuses on data storage. The main source of the data is cleaned, transformed, cataloged and made available for use by managers and other business professionals for data mining, online analytical processing, market research and decision support (Marakas & O'Brien 2009). However, the means to retrieve and analyze data, to extract, transform and load data, and to manage the data dictionary are also considered essential components of a data warehousing system. Many references to data warehousing use this broader context. Thus, an expanded definition for data warehousing includes business intelligence tools, tools to extract, transform and load data into the repository, and tools to manage and retrieve metadata.

Benefits of a data warehouse

A data warehouse maintains a copy of information from the source transaction systems. This architectural complexity provides the opportunity to:
  • Congregate data from multiple sources into a single database so a single query engine can be used to present data.
  • Mitigate the problem of database isolation level lock contention in transaction processing systems, caused by attempts to run large, long-running analysis queries against transaction processing databases.
  • Maintain data history, even if the source transaction systems do not.
  • Integrate data from multiple source systems, enabling a central view across the enterprise. This benefit is always valuable, but particularly so when the organization has grown by merger.
  • Improve data quality, by providing consistent codes and descriptions, flagging or even fixing bad data.
  • Present the organization's information consistently.
  • Provide a single common data model for all data of interest regardless of the data's source.
  • Restructure the data so that it makes sense to the business users.
  • Restructure the data so that it delivers excellent query performance, even for complex analytic queries, without impacting the operational systems.
  • Add value to operational business applications, notably customer relationship management (CRM) systems.

Generic data warehouse environment

The environment for data warehouses and marts includes the following:
  • Source systems that provide data to the warehouse or mart;
  • Data integration technology and processes that are needed to prepare the data for use;
  • Different architectures for storing data in an organization's data warehouse or data marts;
  • Different tools and applications for the variety of users;
  • Metadata, data quality, and governance processes that ensure the warehouse or mart meets its purposes.
In regards to source systems listed above, Rainer states, “A common source for the data in data warehouses is the company’s operational databases, which can be relational databases” (130).
Regarding data integration, Rainer states, “It is necessary to extract data from source systems, transform them, and load them into a data mart or warehouse” (131).
Rainer discusses storing data in an organization’s data warehouse or data marts. “There are a variety of possible architectures to store decision-support data” (131).
Metadata are data about data. “IT personnel need information about data sources; database, table, and column names; refresh schedules; and data usage measures” (133).
Today, the most successful companies are those that can respond quickly and flexibly to market changes and opportunities. A key to this response is the effective and efficient use of data and information by analysts and managers (Rainer, 127). A “data warehouse” is a repository of historical data that are organized by subject to support decision makers in the organization (128). Once data are stored in a data mart or warehouse, they can be accessed.

Facts

A fact is a value or measurement that represents something about the managed entity or system.
Facts as reported by the reporting entity are said to be at raw level.
For example, if a BTS receives 1,000 requests for traffic channel allocation, allocates 820 of them, and rejects the remaining 180, it would report three facts or measurements to a management system:
  • tch_req_total = 1000
  • tch_req_success = 820
  • tch_req_fail = 180
Facts at the raw level are further aggregated to higher levels in various dimensions to extract more service- or business-relevant information from them. These are called aggregates, summaries, or aggregated facts.
For example, if there are three BTSs in a city, the facts above can be aggregated from the BTS level to the city level in the network dimension:
  • tch_req_success_city = tch_req_success_bts1 + tch_req_success_bts2 + tch_req_success_bts3
  • avg_tch_req_success_city = (tch_req_success_bts1 + tch_req_success_bts2 + tch_req_success_bts3) / 3
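A sketch of that roll-up in Python, using the figures from the example above (the numbers for bts2 and bts3 are invented to complete the illustration):

```python
# Raw facts reported by three BTSs in one city (bts2/bts3 figures illustrative).
bts_facts = [
    {"bts": "bts1", "tch_req_total": 1000, "tch_req_success": 820, "tch_req_fail": 180},
    {"bts": "bts2", "tch_req_total": 900,  "tch_req_success": 780, "tch_req_fail": 120},
    {"bts": "bts3", "tch_req_total": 1100, "tch_req_success": 990, "tch_req_fail": 110},
]

# Aggregate from the BTS level to the city level in the network dimension.
tch_req_success_city = sum(f["tch_req_success"] for f in bts_facts)
avg_tch_req_success_city = tch_req_success_city / len(bts_facts)

print(tch_req_success_city)      # 2590
print(avg_tch_req_success_city)  # ~863.3
```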

Dimensional vs. normalized approach for storage of data

There are two leading approaches to storing data in a data warehouse — the dimensional approach and the normalized approach.
Supporters of the dimensional approach, referred to as “Kimballites”, follow Ralph Kimball’s approach, in which the data warehouse should be modeled using a dimensional model/star schema. Supporters of the normalized approach, also called the 3NF model, are referred to as “Inmonites”; they follow Bill Inmon’s approach, in which the data warehouse should be modeled using an E-R model/normalized model.
In a dimensional approach, transaction data are partitioned into "facts", which are generally numeric transaction data, and "dimensions", which are the reference information that gives context to the facts. For example, a sales transaction can be broken up into facts such as the number of products ordered and the price paid for the products, and into dimensions such as order date, customer name, product number, order ship-to and bill-to locations, and salesperson responsible for receiving the order.
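To make that split concrete, here is a small sketch (names and keys are hypothetical) of how such a transaction divides into a fact row with foreign keys and the dimension rows that give it context, followed by a typical dimensional query:

```python
# Dimensions: reference information that gives context to the facts.
dim_date        = {1: {"order_date": "2008-05-23"}}
dim_customer    = {1: {"customer_name": "Jane Smith"}}
dim_product     = {1: {"product_number": "P-42"}}
dim_salesperson = {1: {"salesperson": "R. Jones"}}

# Fact: the numeric measurements of the transaction, plus foreign keys
# pointing at the dimension rows above.
fact_sales = [
    {"date_key": 1, "customer_key": 1, "product_key": 1, "salesperson_key": 1,
     "quantity_ordered": 3, "price_paid": 59.97},
]

# A typical dimensional query: revenue by customer name.
revenue_by_customer = {}
for row in fact_sales:
    name = dim_customer[row["customer_key"]]["customer_name"]
    revenue_by_customer[name] = revenue_by_customer.get(name, 0.0) + row["price_paid"]
print(revenue_by_customer)
```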
A key advantage of a dimensional approach is that the data warehouse is easier for the user to understand and to use. Also, the retrieval of data from the data warehouse tends to operate very quickly. Dimensional structures are easy to understand for business users, because the structure is divided into measurements/facts and context/dimensions. Facts are related to the organization’s business processes and operational system whereas the dimensions surrounding them contain context about the measurement (Kimball, Ralph 2008).
The main disadvantages of the dimensional approach are:
  1. In order to maintain the integrity of facts and dimensions, loading the data warehouse with data from different operational systems is complicated, and
  2. It is difficult to modify the data warehouse structure if the organization adopting the dimensional approach changes the way in which it does business.
In the normalized approach, the data in the data warehouse are stored following, to a degree, database normalization rules. Tables are grouped together by subject areas that reflect general data categories (e.g., data on customers, products, finance, etc.). The normalized structure divides data into entities, which creates several tables in a relational database. When applied in large enterprises, the result is dozens of tables that are linked together by a web of joins. Furthermore, each of the created entities is converted into a separate physical table when the database is implemented (Kimball, Ralph 2008). The main advantage of this approach is that it is straightforward to add information to the database. A disadvantage of this approach is that, because of the number of tables involved, it can be difficult for users both to:
  1. join data from different sources into meaningful information and then
  2. access the information without a precise understanding of the sources of data and of the data structure of the data warehouse.
Both normalized and dimensional models can be represented in entity-relationship diagrams, as both contain joined relational tables. The difference between the two models is the degree of normalization.
These approaches are not mutually exclusive, and there are other approaches. Dimensional approaches can involve normalizing data to a degree (Kimball, Ralph 2008).
In Information-Driven Business (Wiley 2010),[6] Robert Hillard proposes an approach to comparing the two approaches based on the information needs of the business problem. The technique shows that normalized models hold far more information than their dimensional equivalents (even when the same fields are used in both models) but this extra information comes at the cost of usability. The technique measures information quantity in terms of Information Entropy and usability in terms of the Small Worlds data transformation measure.[7]

Source: Wikipedia.