A Data Engineer is implementing a near real-time ingestion pipeline to load data into Snowflake using the Snowflake Kafka connector. Three Kafka topics will be created.
Which Snowflake objects are created automatically when the Kafka connector starts? (Choose three.)
acd
What is the purpose of the BUILD_STAGE_FILE_URL function in Snowflake?
c
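For context, BUILD_STAGE_FILE_URL generates a permanent Snowflake-hosted URL to a file on a stage. A minimal sketch of the call, where the stage name and file path are hypothetical:

```sql
-- Returns a permanent URL to the staged file; @my_stage and the path
-- are placeholder names, not from the question.
SELECT BUILD_STAGE_FILE_URL(@my_stage, '/data/orders.json');
```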
A Data Engineer needs to load JSON output from some software into Snowflake using Snowpipe.
Which recommendations apply to this scenario? (Choose three.)
bde
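For reference, a Snowpipe that ingests JSON is typically defined along these lines. All object names below are hypothetical, and STRIP_OUTER_ARRAY is shown only as a commonly recommended option for files containing a JSON array of records:

```sql
-- Hypothetical names throughout (my_pipe, my_table, my_stage).
CREATE OR REPLACE PIPE my_pipe
  AUTO_INGEST = TRUE
  AS
  COPY INTO my_table
  FROM @my_stage
  FILE_FORMAT = (TYPE = 'JSON' STRIP_OUTER_ARRAY = TRUE);
```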
A company has an extensive script in Scala that transforms data by leveraging DataFrames. A Data Engineer needs to move these transformations to Snowpark.
What characteristics of data transformations in Snowpark should be considered to meet this requirement? (Choose two.)
ab
Which output is provided by both the SYSTEM$CLUSTERING_DEPTH function and the SYSTEM$CLUSTERING_INFORMATION function?
a
Given the table SALES which has a clustering key of column CLOSED_DATE, which table function will return the average clustering depth for the SALES_REPRESENTATIVE column for the North American region?
b
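SYSTEM$CLUSTERING_DEPTH accepts a table name, a column list, and an optional predicate, so a call matching this scenario might look like the following. The region column name and value are assumptions for illustration, not from the question:

```sql
-- Average clustering depth for SALES_REPRESENTATIVE, restricted by a
-- predicate; 'region' and 'North America' are hypothetical.
SELECT SYSTEM$CLUSTERING_DEPTH(
  'SALES',
  '(SALES_REPRESENTATIVE)',
  'region = ''North America'''
);
```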
Which Snowflake objects does the Snowflake Kafka connector use? (Choose three.)
ade
A table is loaded using Snowpipe and truncated afterwards. Later, a Data Engineer finds that the table needs to be reloaded, but the pipe's load metadata prevents the same files from being loaded again.
How can this issue be solved using the LEAST amount of operational overhead?
c
FORCE = TRUE isn't supported for Snowpipe, as specified in the Usage Notes: https://docs.snowflake.com/en/sql-reference/sql/create-pipe#usage-notes
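Because a pipe's load history can't be bypassed with FORCE, one common low-overhead workaround is a one-off COPY INTO statement: COPY maintains load metadata separately from the pipe and does support FORCE = TRUE. A sketch with hypothetical table and stage names:

```sql
-- One-off manual reload; my_table and my_stage are placeholders.
-- FORCE = TRUE reloads files even if COPY considers them already loaded.
COPY INTO my_table
FROM @my_stage
FILE_FORMAT = (TYPE = 'JSON')
FORCE = TRUE;
```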
The following is returned from SYSTEM$CLUSTERING_INFORMATION() for a table named ORDERS with a DATE column named O_ORDERDATE:
What does the total_constant_partition_count value indicate about this table?
a
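For context, the output in the question comes from a call of this shape; the function takes the table name and, optionally, the columns to evaluate, and returns a JSON object that includes total_partition_count, total_constant_partition_count, and average_depth:

```sql
SELECT SYSTEM$CLUSTERING_INFORMATION('ORDERS', '(O_ORDERDATE)');
```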
What is a characteristic of the use of external tokenization?
d
https://docs.snowflake.com/en/user-guide/security-column-ext-token-intro
External Tokenization benefits
The following summarizes some of the key benefits of External Tokenization.
Pre-load Tokenized Data
Using a tokenization provider, tokenized data is pre-loaded into Snowflake. Therefore, even without applying a masking policy to a column in a table or view, users never see the real data value. This provides enhanced data security to the most sensitive data in your organization.
When using the Snowflake Kafka Connector, several Snowflake objects are automatically created to facilitate data ingestion. Here's what happens:
Internal Stages (D) – For each Kafka topic, an internal stage is created to temporarily hold the streamed data before loading it into Snowflake tables.
Pipes (C) – A Snowpipe is created (with a pipe object) for each topic. This pipe monitors the internal stage and loads data into the target table.
Tables (A) – Tables can be automatically created, but only if the auto.create flag is set to true in the connector configuration. Otherwise, the tables must be created manually before ingestion begins.
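To make the topic-to-object mapping concrete, here is a sketch of a typical connector configuration. The property names are from the connector's standard configuration; all values are placeholders, and the topic/table names are hypothetical:

```json
{
  "name": "snowflake-sink",
  "config": {
    "connector.class": "com.snowflake.kafka.connector.SnowflakeSinkConnector",
    "topics": "topic1,topic2,topic3",
    "snowflake.topic2table.map": "topic1:t1,topic2:t2,topic3:t3",
    "snowflake.url.name": "<account>.snowflakecomputing.com",
    "snowflake.user.name": "<user>",
    "snowflake.private.key": "<private_key>",
    "snowflake.database.name": "<database>",
    "snowflake.schema.name": "<schema>"
  }
}
```

With this configuration, the connector creates a stage and a pipe per topic, and loads into the mapped tables (t1, t2, t3 here).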