Products

Standardised Data Architecture

We provide data engineers specialised in building end-to-end data architectures, from the operational business data sources to the downstream data applications. We connect the business operating systems to a centralised reporting data warehouse with tools such as Airbyte, Hevo Data and Dataddo. For the data warehousing we prefer Google BigQuery due to its low cost and ultra-fast querying speed. The data transformations inside the data warehouse are done with dbt Cloud, which stores all logic in a repository such as GitHub. This means that we codify all your business logic, exceptions and definitions. After this step we set up the dimensional model and combine the dimensional and fact models into several OBTs (One Big Table models). These tables are storage-heavy, but contain dozens of dimensions, which makes downstream analysis significantly more intuitive for end users. We visualise the data inside the warehouse through tools like Power BI or Looker Studio to validate that all the code we put in place generates clean output data. As part of this step we embed data quality checks based on custom business logic requirements, to make sure there are no data quality errors in critical input fields.

With a clean and consistent staging and fact layer available, we define the business entities, dimensions, measures and metrics that fit the domain model for your industry and business model. We then aggregate the measures and metrics on the entities that represent the axes around which your company runs its operations; depending on the business model these entities are accounts, projects, employees, departments, cost centers and teams. The downstream applications are connected through the external layer defined in the data warehouse. On top of this external layer we set up a connection with Cube, which provides the semantic layer that enables headless analytics applications like Steep, Delphi and Klipfolio to deliver their distinct value. We also build the interface that allows near-realtime data feeds into the business spreadsheets used by sales, finance and operations for ad-hoc calculations and forecasting. All our domain models are based on our global best practices and the operational and analytics data standards found on MetricHQ.

The engineering of your data architecture has reached maturity when the SDA score exceeds 95%. We base the decision to expand development beyond the basic setup on three core performance metric targets being met over the last 3 months: 1) the entry data quality test score is above 99.9%, 2) data freshness stays within the agreed tier level (24 hours, 1 hour or 15 minutes), and 3) the issue-resolve duration is under 48 hours.
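As an illustration of that expansion rule, the sketch below combines the three targets in Python; the function name, tier labels and sample values are hypothetical and only show how the thresholds interact.

    from datetime import timedelta

    # Freshness limits per hypothetical tier level (illustrative labels).
    TIER_FRESHNESS = {
        "24h": timedelta(hours=24),
        "1h": timedelta(hours=1),
        "15min": timedelta(minutes=15),
    }

    def sda_targets_met(dq_test_score: float,
                        worst_freshness: timedelta,
                        slowest_issue_resolve: timedelta,
                        tier: str = "24h") -> bool:
        """Check the three core performance metric targets over the trailing 3 months."""
        return (dq_test_score > 0.999                              # 1) entry data quality test score > 99.9%
                and worst_freshness <= TIER_FRESHNESS[tier]        # 2) freshness within the agreed tier
                and slowest_issue_resolve <= timedelta(hours=48))  # 3) issue-resolve duration < 48 hours

    # Example: 99.95% quality score, slowest refresh 6 hours, slowest fix 30 hours on the 24-hour tier.
    print(sda_targets_met(0.9995, timedelta(hours=6), timedelta(hours=30)))  # True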
Product-code: SDA | Maturity-state: operational

Upstream Data Operations

After delivery of the Standardised Data Architecture (SDA), we usually agree on a Service Level Agreement to monitor and maintain all upstream data pipelines up to the staging layer in the data warehouse. The operator performs version upgrades to the extraction software, resolves issues when timeouts or source deprecations occur, and takes action when data freshness falls outside the agreed thresholds. Per architecture, a dedicated operator is responsible for ensuring that all performance metrics hit their targets, and for each operator there are multiple fall-back operators who can jump in when a high-priority issue emerges. The key metrics for this product are the upstream-data-pipeline uptime, which should be >99.9%, and the freshness minimum, which should stay below 24 hours, 1 hour or 15 minutes, depending on the architecture requirements.
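For illustration, a minimal sketch of such a freshness check against BigQuery is shown below; the project, table and _loaded_at column names are hypothetical and would follow the actual architecture.

    from datetime import datetime, timedelta, timezone
    from google.cloud import bigquery  # pip install google-cloud-bigquery

    STAGING_TABLE = "my-project.staging.orders"   # hypothetical staging table
    FRESHNESS_LIMIT = timedelta(hours=24)         # 24 h / 1 h / 15 min depending on the agreed tier

    def staging_is_fresh(client: bigquery.Client) -> bool:
        """Return True when the newest loaded row is within the agreed freshness limit."""
        sql = f"SELECT MAX(_loaded_at) AS last_load FROM `{STAGING_TABLE}`"
        last_load = next(iter(client.query(sql).result())).last_load
        if last_load is None:                     # an empty table counts as stale
            return False
        return datetime.now(timezone.utc) - last_load <= FRESHNESS_LIMIT

    if __name__ == "__main__":
        print(staging_is_fresh(bigquery.Client()))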
Product-code: UDO | Maturity-state: operational

Downstream Data Operations

Data applications can use the standardised and centralised data from the warehouse for many purposes. The operator performs version upgrades to the APIs, reverse-loading (reverse ETL) applications and pipeline tools used for the downstream applications. As with the upstream data operations, a dedicated operator per architecture is responsible for ensuring that all performance metrics hit their targets, and for each operator there are multiple fall-back operators who can jump in when a high-priority issue emerges. The key metric for this product is the downstream-data-pipeline uptime, which should be >99.9%.
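As a small illustration, the sketch below computes such an uptime figure from logged downtime windows; the incident durations are made-up sample values.

    from datetime import timedelta

    MONTH = timedelta(days=30)
    incidents = [timedelta(minutes=12), timedelta(minutes=5)]  # hypothetical downtime windows

    downtime = sum(incidents, timedelta())
    uptime = 1 - downtime / MONTH
    print(f"uptime: {uptime:.4%}")         # e.g. 99.9606%
    print("target met:", uptime > 0.999)   # >99.9% uptime target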
Product-code: DDO | Maturity-state: beta

End User Support

We offer direct support on the architecture we delivered through our support team. Our team can be reached by phone (+31 030 207 2961) during business hours (9:00-17:00 CET), by email at support@maxqanalytics.io, or in the shared Slack channel we offer as part of our support agreement. Depending on the incoming issue, the task is allocated to the specialist who holds ownership of the relevant domain of the architecture. We aim to remedy each issue within 2-4 hours; if no remedy is found, the issue is escalated to the data architect of the respective system. You are always free to call or text the data engineer(s) who built the architecture, but response and remedy times cannot be committed to in that case.
Product-code: EUS | Maturity-state: operational

Integral Governance Framework

Once the data architecture is fully operational, the first layer on top of the data model can be built, which consists of modelling a set number of metrics per team. These metrics are then ordered by hierarchy, with the top 4 metrics per team labelled as Key Metrics. The key metrics of each team are combined into a single reporting setup that is part of a monthly update which can be sent to the entire company, so that your employees are on the same page about which metrics are tracked and which ones determine the success of the company. For every metric tracked in the governance framework, the users define a target ratio and a minimum ratio. This enables C-suite executives, team leads and employees to understand which metrics require the most attention in the short and longer term, which metrics are deteriorating, and which ones perform at a mediocre but stable level.

Once the framework is fully set, the company can observe tens up to hundreds of metrics and decide which ones require the most attention for the OKRs of that quarter. This setup allows OKRs and goal-setting to become fully integrated into the company's data model: all key results are present in the data model and identified as important for company performance, and when medium attention has not improved a metric sufficiently, it is bumped to become part of the quarterly or annual key results.
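As a minimal illustration of the target and minimum ratios, the sketch below classifies a metric into attention levels; the labels, function name and example values are hypothetical.

    def classify_metric(value: float, minimum: float, target: float) -> str:
        """Label a tracked metric so teams can see where attention is needed."""
        if value >= target:
            return "on target"
        if value >= minimum:
            return "needs attention"   # between the minimum and the target ratio
        return "critical"              # below the agreed minimum ratio

    # Example: net revenue retention with a minimum ratio of 0.90 and a target ratio of 1.05.
    print(classify_metric(0.97, minimum=0.90, target=1.05))  # needs attention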
Product-code: IGF | Maturity-state: alpha

Outperform your metrics.

Schedule a call. We won't bite.