Configuring Through Environment Variables

Passing configuration values through system environment variables

If a user does not want to put the value of a JSON parameter directly in the config file, for security reasons or otherwise, they can supply that value through a system environment variable. The name of the environment variable is placed in the config file in place of the JSON value, and at runtime the config file is updated with the variable's value.

Below is a config file snippet that references a few environment variables.

"output" : [{
  "name":"unifiedCustomers", 
  "format":"net.snowflake.spark.snowflake",
  "props": {
    "location": "$location$",
    "delimiter": ",",
    "header": false,				
    "password": "$passwd",					
  }
}],

"labelDataSampleSize" : 0.5,
"numPartitions":4,
"modelId": $modelId$,
"zinggDir": "models",
"collectMetrics": $collectMetrics$

Environment variables must be enclosed within dollar signs ($var$) to take effect, and the config file name must carry the suffix .env. As with regular JSON, string values must be placed within quotes ("$var$"), while Boolean and numeric values are written without quotes ($var$).
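
For example, assuming the snippet above is saved in a file named config.json.env (the file name and variable values below are hypothetical), the referenced variables can be exported in the shell before launching Zingg:

# Hypothetical values, exported so Zingg can substitute them at runtime.
export location="/tmp/zinggOutput"
export passwd="secret"
export modelId=101
export collectMetrics=true

./scripts/zingg.sh --phase match --conf config.json.env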
