Overview of 162Grid


Rapid Data Processing

Speed is critical to your investment decision-making

A full C++ deployment and I/O-optimized databases deliver exceptionally fast performance in both processing and analyzing data.

It takes us only 800 milliseconds to generate around 4,000 factors on 28 years of data, allowing you to process and analyze more data.

Clean, Reliable Data

Reliable data increases trust in insights generated by analysts

All raw data is stored in a warehouse to maintain the integrity of the source data. We then apply a unified schema to process these disparate data sources in a homogenized way.
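What schema unification looks like in practice can be sketched as adapters from each vendor's conventions into one canonical record. The vendor field names, units, and the `CanonicalQuote` type below are hypothetical, chosen only to illustrate the idea:

```cpp
#include <string>

// Hypothetical canonical record: every downstream consumer sees this
// one shape, regardless of which vendor supplied the data.
struct CanonicalQuote {
    std::string ticker;
    double close;
    long volume;  // always raw share counts in the unified schema
};

// Suppose vendor A reports ("symbol", "last_px", volume in thousands).
CanonicalQuote fromVendorA(const std::string& symbol, double last_px,
                           long shares_k) {
    return {symbol, last_px, shares_k * 1000};  // normalize units
}

// Suppose vendor B already reports raw share counts.
CanonicalQuote fromVendorB(const std::string& ric, double px, long shares) {
    return {ric, px, shares};
}
```

The unit conversion inside `fromVendorA` is the kind of per-vendor quirk that the adapter layer absorbs so the rest of the pipeline never sees it.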

Data is transferred to an analytical database for error detection. During this process, we leverage our finance domain knowledge and familiarity with these databases to detect and clean errors from data supplied by vendors.
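The finance-aware checks involved can be sketched as simple validation rules applied to each record. The specific rules and the 90% jump threshold below are illustrative assumptions, not our production logic:

```cpp
#include <cmath>
#include <string>
#include <vector>

// Hypothetical sketch of vendor-data error detection: domain-aware
// rules flag suspect records before they reach analysts.
std::vector<std::string> validate(double prevClose, double close,
                                  long volume) {
    std::vector<std::string> errors;
    if (close <= 0.0) errors.push_back("non-positive price");
    if (volume < 0)   errors.push_back("negative volume");
    // A >90% one-day move is usually a bad tick or an unadjusted split.
    if (prevClose > 0.0 && std::fabs(close / prevClose - 1.0) > 0.9)
        errors.push_back("suspicious price jump");
    return errors;
}
```

Flagged records can then be corrected against a second source or quarantined, rather than silently contaminating downstream factors.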

Our technology monitors data on an ongoing basis, ensuring that asset managers have access to the most up-to-date data for their analysis.

Flexible Architecture

Adding data sources is key to adapting to the ever-changing market environment

162Grid combines financial datasets with varying schemas under one unified schema for easy use and richer analysis.

Our modular deployment makes the platform highly scalable, allowing us to adapt quickly to new information needs when adding new tickers and mnemonics.

Factors can be added at any time with full customization, allowing users to build anything from simple ratios to complex Discounted Cash Flow analysis.
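As an illustration of the "complex" end of that range, a bare-bones discounted cash flow calculation might look like the sketch below. This is a generic DCF formula, not our factor engine's actual interface:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical sketch of a custom DCF factor: present value of a
// stream of projected cash flows at a given discount rate, where
// cashFlows[t] is received at the end of year t + 1.
double dcf(const std::vector<double>& cashFlows, double rate) {
    double pv = 0.0;
    for (std::size_t t = 0; t < cashFlows.size(); ++t)
        pv += cashFlows[t] / std::pow(1.0 + rate, static_cast<double>(t + 1));
    return pv;
}
```

For example, a single cash flow of 110 one year out, discounted at 10%, has a present value of 100. Simple ratios and a function like this sit at opposite ends of the same user-defined factor mechanism.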


The team comprises:

  1. Mathematicians and statisticians to find relationships within the data.
  2. Engineers and high-performance specialists to develop and maintain the infrastructure and build systems fast enough to process vast amounts of information.
  3. Financial professionals to identify, collect, and organize key financial factors for use by our clients.

This cross-disciplinary team drives our competitive advantage.

We have built proprietary algorithms to collect different types of data with their own unique schemas. This data is cleaned and standardized into a homogenized proprietary database that enables rich analysis.

We have extensive experience in working with multiple datasets including:

  • Equity fundamentals
  • Market pricing
  • Financial estimates
  • Corporate actions
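Corporate actions in particular require care when combining datasets. As a sketch, back-adjusting a price series for a stock split keeps the history comparable; the helper below is hypothetical, not our production code:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch of corporate-action handling: back-adjust
// prices before a split so the whole series is comparable.
// ratio = 2.0 for a 2-for-1 split; splitIndex is the first
// post-split observation.
std::vector<double> adjustForSplit(std::vector<double> prices,
                                   std::size_t splitIndex, double ratio) {
    for (std::size_t i = 0; i < splitIndex && i < prices.size(); ++i)
        prices[i] /= ratio;
    return prices;
}
```

Without this adjustment, a 2-for-1 split would look like a 50% one-day loss to any factor computed on the raw series.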

We use some of the fastest technology available (C++ and optimized databases) to produce factors at incredible speeds: 4,000 factors in 800 milliseconds.

We are experienced in using high-performance computational infrastructure for both CPU-intensive work (data cleaning, report generation, exploratory data analysis) and GPU-intensive work (deep learning, natural language processing).

Our systems are optimized for populating and querying databases to ensure reliable data warehousing, ETL (extraction, transformation, and loading) processing, and data governance.
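One hop of such an ETL pipeline can be sketched as extracting a raw text row and transforming it into a typed record ready to load. The CSV layout and `Row` type here are illustrative assumptions:

```cpp
#include <sstream>
#include <string>

// Hypothetical sketch of the "T" in ETL: parse one raw CSV line
// ("ticker,close") into a typed record before loading it into a table.
struct Row {
    std::string ticker;
    double close;
};

Row transform(const std::string& csvLine) {
    std::stringstream ss(csvLine);
    std::string ticker, closeStr;
    std::getline(ss, ticker, ',');
    std::getline(ss, closeStr, ',');
    return {ticker, std::stod(closeStr)};  // std::stod throws on bad input
}
```

In a real pipeline this step would run inside the validation and governance machinery described above, so malformed rows are caught rather than loaded.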