
Kafka Connect

Integration Details

This plugin extracts the following:

  • Source and Sink Connectors in Kafka Connect as Data Pipelines
  • For Source connectors - Data Jobs that represent lineage from the source dataset to the Kafka topic, one per {connector_name}:{source_dataset} combination
  • For Sink connectors - Data Jobs that represent lineage from the Kafka topic to the destination dataset, one per {connector_name}:{topic} combination

Concept Mapping

This ingestion source maps the following Source System Concepts to DataHub Concepts:

Source Concept     DataHub Concept    Notes
"kafka-connect"    Data Platform
Connector          DataFlow
Kafka Topic        Dataset

Current limitations

Works only for

  • Source connectors: JDBC, Debezium, Mongo, and generic connectors with a user-defined lineage graph
  • Sink connectors: BigQuery, Confluent S3, Snowflake

Important Capabilities

Capability                 Notes
Detect Deleted Entities    Optionally enabled via stateful_ingestion.remove_stale_metadata
Platform Instance          Enabled by default
Schema Metadata            Enabled by default
Table-Level Lineage        Enabled by default
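
For example, stale-entity detection is controlled by the stateful_ingestion block. A minimal sketch is shown below; it assumes a recipe-level pipeline_name (the name kafka_connect_pipeline is only illustrative), which stateful ingestion requires.

    # Sketch: enabling stale-metadata removal (requires a pipeline_name)
    pipeline_name: kafka_connect_pipeline   # illustrative; use any stable, unique name
    source:
      type: "kafka-connect"
      config:
        connect_uri: "http://localhost:8083"
        stateful_ingestion:
          enabled: true
          remove_stale_metadata: true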

CLI based Ingestion

Install the Plugin

The kafka-connect source works out of the box with acryl-datahub.

Starter Recipe

Check out the following recipe to get started with ingestion! See below for full configuration options.

For general pointers on writing and running a recipe, see our main recipe guide.

source:
  type: "kafka-connect"
  config:
    # Coordinates
    connect_uri: "http://localhost:8083"

    # Credentials
    username: admin
    password: password

    # Optional
    # Platform instance mapping to use when constructing URNs.
    # Use if single instance of platform is referred across connectors.
    platform_instance_map:
      mysql: mysql_platform_instance

sink:
  # sink configs
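
If you are sending metadata to DataHub over REST, the sink block could be a minimal datahub-rest configuration such as the sketch below; the server URL and token are placeholders for your own deployment.

    sink:
      type: "datahub-rest"
      config:
        server: "http://localhost:8080"   # your DataHub GMS / REST endpoint
        # token: "<access-token>"         # only needed if your instance requires authentication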

Config Details

Note that a . is used to denote nested fields in the YAML recipe.

cluster_name (string)
    Cluster to ingest from.
    Default: connect-cluster

connect_to_platform_map (map(str,map))
    Per-connector platform instance mapping, for use when multiple instances of a platform are
    referred to across connectors. See "Working with Platform Instances" below.

connect_uri (string)
    URI to connect to.
    Default: http://localhost:8083/

convert_lineage_urns_to_lowercase (boolean)
    Whether to convert the URNs of ingested lineage datasets to lowercase.
    Default: False

password (string)
    Kafka Connect password.

platform_instance (string)
    The instance of the platform that all assets produced by this recipe belong to. This should be
    unique within the platform. See https://datahubproject.io/docs/platform-instances/ for more details.

platform_instance_map (map(str,string))
    Platform instance mapping to use when constructing URNs, for use when a single instance of a
    platform is referred to across connectors. See "Working with Platform Instances" below.

username (string)
    Kafka Connect username.

env (string)
    The environment that all assets produced by this connector belong to.
    Default: PROD

connector_patterns (AllowDenyPattern)
    Regex patterns for connectors to filter for ingestion.
    Default: {'allow': ['.*'], 'deny': [], 'ignoreCase': True}

connector_patterns.ignoreCase (boolean)
    Whether to ignore case sensitivity during pattern matching.
    Default: True

connector_patterns.allow (array of string)
    List of regex patterns to include in ingestion.
    Default: ['.*']

connector_patterns.deny (array of string)
    List of regex patterns to exclude from ingestion.
    Default: []

generic_connectors (array of GenericConnectorConfig)
    Provide a lineage graph for source connectors other than the Confluent JDBC Source Connector,
    Debezium Source Connector, and Mongo Source Connector.
    Default: []

generic_connectors.GenericConnectorConfig.connector_name (string)
generic_connectors.GenericConnectorConfig.source_dataset (string)
generic_connectors.GenericConnectorConfig.source_platform (string)

provided_configs (array of ProvidedConfig)
    Provided configurations. See "Provided Configurations from External Sources" below.

provided_configs.ProvidedConfig.path_key (string)
provided_configs.ProvidedConfig.provider (string)
provided_configs.ProvidedConfig.value (string)

stateful_ingestion (StatefulStaleMetadataRemovalConfig)
    Base specialized config for stateful ingestion with stale-metadata removal capability.

stateful_ingestion.enabled (boolean)
    Whether or not to enable stateful ingestion.
    Default: True if a pipeline_name is set and either a datahub-rest sink or datahub_api is
    specified, otherwise False.

stateful_ingestion.remove_stale_metadata (boolean)
    Soft-deletes entities that were present in the last successful run but are missing in the
    current run, with stateful_ingestion enabled.
    Default: True
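
As an illustration of the dotted notation above, the sketch below limits ingestion to a subset of connectors and declares lineage for a generic connector; the connector, dataset, and platform names are placeholders.

    source:
      type: "kafka-connect"
      config:
        connect_uri: "http://localhost:8083"
        connector_patterns:
          allow:
            - "mysql_.*"
          deny:
            - ".*_test"
        generic_connectors:
          - connector_name: my_generic_connector
            source_dataset: my_source_dataset
            source_platform: postgres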

Advanced Configurations

Working with Platform Instances

If multiple instances of Kafka or of the source/sink systems are referred to in your Kafka Connect setup, you need to configure the platform instance for these systems in the kafka-connect recipe to generate correct lineage edges. You must have already set platform_instance in the recipes of the original source/sink systems. Refer to the document Working with Platform Instances to learn more.

There are two options for declaring a source/sink system's platform_instance in the kafka-connect recipe. If a single instance of a platform is used across all Kafka Connect connectors, you can use platform_instance_map to specify the platform_instance to use for that platform when constructing URNs for lineage.

Example:

    # Map of platform name to platform instance
    platform_instance_map:
      snowflake: snowflake_platform_instance
      mysql: mysql_platform_instance

If multiple instances of a platform are used across Kafka Connect connectors, you need to specify the platform_instance to use for that platform for every connector, via connect_to_platform_map.

Example - Multiple MySQL source connectors, each reading from a different MySQL instance

    # Map of platform name to platform instance per connector
    connect_to_platform_map:
      mysql_connector1:
        mysql: mysql_instance1

      mysql_connector2:
        mysql: mysql_instance2

Here, mysql_connector1 and mysql_connector2 are the names of MySQL source connectors as defined in the Kafka Connect connector configuration.

Example - Multiple MySQL source connectors, each reading from a different MySQL instance and writing to a different Kafka cluster

    connect_to_platform_map:
      mysql_connector1:
        mysql: mysql_instance1
        kafka: kafka_instance1

      mysql_connector2:
        mysql: mysql_instance2
        kafka: kafka_instance2

You can also use a combination of platform_instance_map and connect_to_platform_map in your recipe. Note that the platform_instance specified for a connector in connect_to_platform_map always takes precedence, even if a platform_instance for the same platform is set in platform_instance_map, as in the sketch below.
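
For instance, a recipe could set a default instance for the mysql platform in platform_instance_map and override it for a single connector; the instance and connector names here are illustrative.

    # Default instance for the mysql platform...
    platform_instance_map:
      mysql: mysql_default_instance

    # ...overridden for this specific connector
    connect_to_platform_map:
      mysql_special_connector:
        mysql: mysql_other_instance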

If you do not use platform_instance in the original source/sink recipes, you do not need to specify it in the above configurations.

Note that you do not need to specify platform_instance for BigQuery.

Example - Multiple BigQuery sink connectors, each writing to a different Kafka cluster

    connect_to_platform_map:
      bigquery_connector1:
        kafka: kafka_instance1

      bigquery_connector2:
        kafka: kafka_instance2

Provided Configurations from External Sources

Kafka Connect supports pluggable configuration providers that can load configuration data from external sources at runtime. These values are not available to the DataHub ingestion source through the Kafka Connect APIs. If you use such provided configurations to specify connection URLs (database, etc.) in Kafka Connect connector configurations, you also need to add them to the provided_configs section of the recipe for DataHub to generate correct lineage.

    # Optional mapping of provider configurations if using
    provided_configs:
      - provider: env
        path_key: MYSQL_CONNECTION_URL
        value: jdbc:mysql://test_mysql:3306/librarydb

Code Coordinates

  • Class Name: datahub.ingestion.source.kafka_connect.kafka_connect.KafkaConnectSource
  • Browse on GitHub

Questions

If you've got any questions on configuring ingestion for Kafka Connect, feel free to ping us on our Slack.