Zingg-0.3.4
Frequently Asked Questions

Questions on your mind? Here are a few of them, answered!



How much training is enough?

Typically 30-40 positive pairs (matches) are enough to build a good model. While marking records in the interactive learner, check Zingg's predictions for each pair shown. If they look correct, you can pause labeling and run the train and match phases to see what results you get. If you are not satisfied, run the findTrainingData and label jobs again; they pick up from the last training round.
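The round-trip described above can be sketched as a sequence of Zingg phase invocations. This is an illustrative sketch; the script path and the config file name (config.json) are assumptions, so adjust them to your installation.

```shell
# Another round of training-data generation and labeling
# (picks up from the last training round):
./scripts/zingg.sh --phase findTrainingData --conf config.json
./scripts/zingg.sh --phase label --conf config.json

# Once the learner's predictions look right, build the model and match:
./scripts/zingg.sh --phase train --conf config.json
./scripts/zingg.sh --phase match --conf config.json
```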

Do I need to train for every new dataset?

No, absolutely not! Train only if the schema (attributes or their types) has changed.

Do I need to use a Spark cluster or can I run on a single machine?

Depends on the size of your data. Check hardware sizing for more details.

I don't have much background in ML or Spark. Can I still use Zingg?

Very much so! Zingg uses Spark and ML under the hood precisely so that you don't have to worry about matching rules or scale.

Is Zingg an MDM?

No, Zingg is not an MDM. An MDM is the system of record: it has its own store where linked and mastered records are saved. Zingg enables MDM but is not itself a system of record. You can, however, build an MDM in a data store of your choice using Zingg.

Is Zingg a CDP?

No, Zingg is not a CDP, as it does not stream events or customer data through different channels. Zingg does overlap with a CDP's identity resolution and building of customer 360 views. Here is an article describing how you can build your own CDP on the warehouse with Zingg.

I can do Entity Resolution using a graph database like TigerGraph/Neo4j, why do I need Zingg?

Doing entity resolution in graph databases is easy only if you have trusted, high-quality identifiers like a passport id or SSN through which edges can be defined between records. If you need fuzzy matching, you have to build your own rules and algorithms with thresholds to define matching records. Zingg and graph databases go hand in hand for entity resolution: it is far easier to persist Zingg's graph output to a graph database and do further processing for AML and KYC scenarios there. Read the article for details on how Zingg uses TigerGraph for Entity Resolution.
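To make the "persist Zingg's output to a graph database" idea concrete, here is a minimal sketch of turning matched records into edges. Zingg's match output tags records with a z_cluster id; the sample records and ids below are hypothetical, and the edge list stands in for whatever node/relationship writes your graph database driver would perform.

```python
from collections import defaultdict

# Hypothetical sample of Zingg match output: each record carries a
# z_cluster id grouping records Zingg believes are the same entity.
records = [
    {"id": 1, "name": "Thomas Smith", "z_cluster": "c1"},
    {"id": 2, "name": "Tom Smith",    "z_cluster": "c1"},
    {"id": 3, "name": "Jane Doe",     "z_cluster": "c2"},
]

# Group record ids by cluster.
clusters = defaultdict(list)
for r in records:
    clusters[r["z_cluster"]].append(r["id"])

# Link every record in a cluster to the first one found; in a graph DB
# each pair would become a SAME_AS relationship between record nodes.
edges = [
    (ids[0], other)
    for ids in clusters.values()
    for other in ids[1:]
]
print(edges)  # [(1, 2)]
```

Singleton clusters (like c2 above) produce no edges, so only actual matches are written to the graph.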
