Step By Step Guide
Installation instructions for the Docker image as well as the GitHub release are covered in the installation documentation. If you need to build from the sources or compile for a different flavor of Spark, check the instructions on compiling Zingg.
Decide your hardware based on the hardware sizing guidelines.
Zingg needs a configuration file that defines the data and what kind of matching is needed. You can create the configuration file by following the configuration documentation.
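If you prefer to set things up programmatically, Zingg also exposes the same settings through its Python API. The sketch below is only illustrative: the column names, file paths, and model id are hypothetical placeholders, and it assumes the zingg Python package together with a working Spark installation. A script like this is usually launched through the Zingg runner (for example zingg.sh --run) rather than plain python.

```python
# A minimal sketch of a Zingg configuration built with the Python API instead of
# the JSON file. Column names, paths, and the model id are hypothetical placeholders.
from zingg.client import *   # Arguments, FieldDefinition, MatchType, ...
from zingg.pipes import *    # CsvPipe

args = Arguments()

# Which columns take part in matching, and how each should be compared
fname = FieldDefinition("fname", "string", MatchType.FUZZY)
lname = FieldDefinition("lname", "string", MatchType.FUZZY)
dob = FieldDefinition("dob", "string", MatchType.EXACT)
args.setFieldDefinition([fname, lname, dob])

# Where Zingg keeps its models and how much data to sample for labelling
args.setModelId("100")
args.setZinggDir("models")
args.setNumPartitions(4)
args.setLabelDataSampleSize(0.5)

# Input and output locations
schema = "id string, fname string, lname string, dob string"
args.setData(CsvPipe("customersIn", "data/customers.csv", schema))
args.setOutput(CsvPipe("matchedCustomers", "/tmp/zinggOutput"))
```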
Zingg builds a new set of models (blocking and similarity) for every new schema definition (columns and match types). This means running the findTrainingData and label phases multiple times to build the training dataset from which Zingg will learn. You can read more in the training data documentation.
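For illustration, and reusing the hypothetical args object from the configuration sketch above, the two phases might be driven like this. The label phase is interactive and asks you to mark each pair it shows as a match or not.

```python
from zingg.client import *  # ClientOptions, Zingg

# Sketch: alternate findTrainingData and label runs to grow the training set.
# `args` is the hypothetical configuration object from the sketch above.
# Repeat this pair of phases until enough matching pairs have been labelled.
for phase in ["findTrainingData", "label"]:
    options = ClientOptions([ClientOptions.PHASE, phase])
    Zingg(args, options).initAndExecute()
```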
The training data built in Step 4 above is used to train Zingg and to build and save the models. This is done by running the train phase. Read more in the train phase documentation.
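Continuing the same hypothetical setup, training is a single phase run:

```python
from zingg.client import *  # ClientOptions, Zingg

# Sketch: learn the blocking and similarity models from the labelled pairs
# and save them under the zinggDir/modelId set in the configuration above.
trainOptions = ClientOptions([ClientOptions.PHASE, "train"])
Zingg(args, trainOptions).initAndExecute()
```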
As long as your input columns and field types do not change, the same model should work and you do not need to build a new one. If you change the match type, you can continue to use the existing training data and add more labelled pairs on top of it.
It is now time to apply the model above to our data. This is done by running the match or the link phase, depending on whether you are matching within a single source or linking multiple sources. You can read more about the match and link phases in the documentation.
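Under the same assumptions as the sketches above, applying the model is another single phase run. Swap "match" for "link" if you are resolving records across two different sources.

```python
from zingg.client import *  # ClientOptions, Zingg

# Sketch: apply the trained model to the configured input. Use the "link"
# phase instead of "match" when connecting records across multiple sources.
matchOptions = ClientOptions([ClientOptions.PHASE, "match"])
Zingg(args, matchOptions).initAndExecute()
```

The results land in the output pipe configured earlier, with Zingg's cluster and score columns added to the matched records.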