Salesforce Data Cloud Consultant Practice Test

Exam Title: Salesforce Certified Data Cloud Consultant

Last update: Nov 27, 2025
Question 1

What does the Source Sequence reconciliation rule do in identity resolution?

  • A. Includes data from sources where the data is most frequently occurring
  • B. Identifies which individual records should be merged into a unified profile by setting a priority for specific data sources
  • C. Identifies which data sources should be used in the process of reconciliation by prioritizing the most recently updated data source
  • D. Sets the priority of specific data sources when building attributes in a unified profile, such as a first or last name
Answer: D


Explanation: The Source Sequence reconciliation rule sets the priority of specific data sources when building
attributes in a unified profile, such as a first or last name. This rule allows you to define which data
source should be used as the primary source of truth for each attribute, and which data sources
should be used as fallbacks in case the primary source is missing or invalid. For example, you can set
the Source Sequence rule to use data from Salesforce CRM as the first priority, data from Marketing
Cloud as the second priority, and data from Google Analytics as the third priority for the first name
attribute. This way, the unified profile will use the first name value from Salesforce CRM if it exists,
otherwise it will use the value from Marketing Cloud, and so on. This rule helps you to ensure the
accuracy and consistency of the unified profile attributes across different data
sources.
Reference: Salesforce Data Cloud Consultant Exam Guide; Identity Resolution; Reconciliation Rules
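
For a concrete picture of the fallback behavior, here is a minimal sketch in plain Python (not Data Cloud configuration) of how a source-priority rule resolves a single attribute; the source names mirror the example above, and the function and variable names are purely illustrative:

```python
# Illustrative only: how a Source Sequence rule resolves one attribute.
# Candidate "first name" values, keyed by data source.
candidate_values = {
    "Marketing Cloud": "Bob",
    "Google Analytics": "Robert",
    # Note: no Salesforce CRM value exists for this profile.
}

# Priority order configured in the Source Sequence rule.
source_priority = ["Salesforce CRM", "Marketing Cloud", "Google Analytics"]

def resolve_attribute(candidates: dict, priority: list) -> str | None:
    """Return the value from the highest-priority source that has one."""
    for source in priority:
        value = candidates.get(source)
        if value:  # missing or empty values fall through to the next source
            return value
    return None

print(resolve_attribute(candidate_values, source_priority))  # -> "Bob"
```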

Question 2

Which two dependencies prevent a data stream from being deleted?
Choose 2 answers

  • A. The underlying data lake object is used in activation.
  • B. The underlying data lake object is used in a data transform.
  • C. The underlying data lake object is mapped to a data model object.
  • D. The underlying data lake object is used in segmentation.
Answer: B, C


Explanation: To delete a data stream in Data Cloud, the underlying data lake object (DLO) must not have any dependencies on or references to other objects or processes. The following two dependencies prevent a data stream from being deleted:
Data transform: a process that transforms the ingested data into a standardized format and structure for the data model. A data transform can use one or more DLOs as input or output. If a DLO is used in a data transform, it cannot be deleted until the data transform is removed or modified.
Data model object: an object that represents a type of entity or relationship in the data model. A data model object can be mapped to one or more DLOs to define its attributes and values. If a DLO is mapped to a data model object, it cannot be deleted until the mapping is removed or changed.
Reference: Delete a Data Stream (Salesforce Help); Data Transforms in Data Cloud (Trailhead); Data Model in Data Cloud (Trailhead)

Question 3

What should a user do to pause a segment activation with the intent of using that segment
again?

  • A. Deactivate the segment.
  • B. Delete the segment.
  • C. Skip the activation.
  • D. Stop the publish schedule.
Answer: A


Explanation: The correct answer is A. If a segment is no longer needed, it can be deactivated through Data Cloud, and the deactivation applies to all chosen targets. A deactivated segment no longer publishes, but it can be reactivated at any time, which lets the user pause a segment activation with the intent of using the segment again.
The other options are incorrect for the following reasons:
B. Delete the segment: this permanently removes the segment from Data Cloud and cannot be undone, so the segment cannot be used again.
C. Skip the activation: this skips only the current activation cycle and does not affect future cycles, so it does not pause the activation indefinitely.
D. Stop the publish schedule: this stops the segment from publishing to the chosen targets but does not deactivate it, so it does not pause the activation completely.
Reference: Deactivated Segment; Delete a Segment; Skip an Activation; Stop a Publish Schedule (Salesforce Help)

Question 4

When creating a segment on an individual, what is the result of using two separate
containers linked by an AND as shown below?
GoodsProduct | Count | At Least | 1
Color | Is Equal To | red
AND
GoodsProduct | Count | At Least | 1
PrimaryProductCategory | Is Equal To | shoes

  • A. Individuals who purchased at least one of any ‘red’ product and also purchased at least one pair of ‘shoes’
  • B. Individuals who purchased at least one ‘red shoes’ as a single line item in a purchase
  • C. Individuals who made a purchase of at least one ‘red shoes’ and nothing else
  • D. Individuals who purchased at least one of any ‘red’ product or purchased at least one pair of ‘shoes’
Answer: A


Explanation: When creating a segment on an individual, using two separate containers linked by an AND means
that the individual must satisfy both the conditions in the containers. In this case, the individual must
have purchased at least one product with the color attribute equal to ‘red’ and at least one product
with the primary product category attribute equal to ‘shoes’. The products do not have to be the
same or purchased in the same transaction. Therefore, the correct answer is A.
The other options are incorrect because they imply different logical operators or conditions. Option B
implies that the individual must have purchased a single product that has both the color attribute
equal to ‘red’ and the primary product category attribute equal to ‘shoes’. Option C implies that the
individual must have purchased only one product that has both the color attribute equal to ‘red’ and
the primary product category attribute equal to ‘shoes’ and no other products. Option D implies that
the individual must have purchased either one product with the color attribute equal to ‘red’ or one
product with the primary product category attribute equal to ‘shoes’ or both, which is equivalent to
using an OR operator instead of an AND operator.
Reference: Create a Container for Segmentation; Create a Segment in Data Cloud; Navigate Data Cloud Segmentation
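
To make the difference concrete, the sketch below expresses the two readings in plain Python (not Data Cloud segmentation syntax); the purchase records and field names are illustrative:

```python
# Illustrative only: two AND-linked containers vs. one container with both filters.
purchases = [
    {"color": "red", "category": "hats"},     # a red product
    {"color": "black", "category": "shoes"},  # a pair of shoes
]

# Two separate containers linked by AND: each condition may be satisfied
# by a *different* purchase row (the correct reading, option A).
two_containers = (
    any(p["color"] == "red" for p in purchases)
    and any(p["category"] == "shoes" for p in purchases)
)

# A single container with both filters: one row must satisfy both
# conditions at once (the "red shoes" reading in option B).
single_container = any(
    p["color"] == "red" and p["category"] == "shoes" for p in purchases
)

print(two_containers)    # True  -> this individual is included
print(single_container)  # False -> would be excluded under option B's reading
```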

Question 5

What should an organization use to stream inventory levels from an inventory management
system into Data Cloud in a fast and scalable, near-real-time way?

  • A. Cloud Storage Connector
  • B. Commerce Cloud Connector
  • C. Ingestion API
  • D. Marketing Cloud Personalization Connector
Answer: C


Explanation:
The Ingestion API is a RESTful API that allows you to stream data from any source into Data Cloud in
a fast and scalable way. You can use the Ingestion API to send data from your inventory management
system into Data Cloud as JSON objects, and then use Data Cloud to create data models, segments,
and insights based on your inventory data. The Ingestion API supports both batch and streaming
modes, and can handle up to 100,000 records per second. The Ingestion API also provides features
such as data validation, encryption, compression, and retry mechanisms to ensure data quality and
security.
Reference: Ingestion API Developer Guide; Ingest Data into Data Cloud
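
As a rough illustration of streaming mode, the sketch below posts inventory records as JSON to an Ingestion API streaming endpoint. The host, connector name (inventory_api), object name (InventoryLevel), and token handling are assumptions for illustration only; the Ingestion API Developer Guide defines the actual contract.

```python
import requests

# All values below are hypothetical; the real tenant endpoint, Ingestion API
# connector name, object name, and OAuth token come from your Data Cloud setup.
TENANT = "your-tenant.c360a.salesforce.com"  # assumed host
SOURCE_API_NAME = "inventory_api"            # assumed connector name
OBJECT_NAME = "InventoryLevel"               # assumed target object
ACCESS_TOKEN = "<oauth-access-token>"

url = f"https://{TENANT}/api/v1/ingest/sources/{SOURCE_API_NAME}/{OBJECT_NAME}"

payload = {
    "data": [  # inventory records as JSON objects
        {"sku": "TENT-001", "warehouse": "DFW", "on_hand": 42},
        {"sku": "STOVE-010", "warehouse": "DFW", "on_hand": 7},
    ]
}

resp = requests.post(
    url,
    json=payload,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()  # streamed records are accepted and processed asynchronously
```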

Question 6

Northern Trail Outfitters (NTO), an outdoor lifestyle clothing brand, recently started a new
line of business. The new business specializes in gourmet camping food. For business reasons as well
as security reasons, it's important to NTO to keep all Data Cloud data separated by brand.
Which capability best supports NTO's desire to separate its data by brand?

  • A. Data streams for each brand
  • B. Data model objects for each brand
  • C. Data spaces for each brand
  • D. Data sources for each brand
Answer: C


Explanation: Data spaces are logical containers that allow you to separate and organize your data by different criteria, such as brand, region, product, or business unit. Data spaces help you manage data access, security, and governance, and enable cross-cloud data integration and activation. For NTO, data spaces support the desire to separate data by brand, so the outdoor lifestyle clothing and gourmet camping food businesses can each have their own data models, rules, and insights. Data spaces can also help NTO comply with any data privacy and security regulations that apply to its different brands.
The other options do not provide the same level of data separation and organization. Data streams ingest data from different sources into Data Cloud, but they do not separate the data by brand. Data model objects define the structure and attributes of the data, but they do not isolate the data by brand. Data sources identify the origin and type of the data, but they do not partition the data by brand.
Reference: Data Spaces Overview; Create Data Spaces; Data Privacy and Security in Data Cloud; Data Streams Overview; Data Model Objects Overview; Data Sources Overview

Question 7

Cumulus Financial created a segment called High Investment Balance Customers. This is a
foundational segment that includes several segmentation criteria the marketing team should
consistently use.
Which feature should the consultant suggest the marketing team use to ensure this consistency
when creating future, more refined segments?

  • A. Create new segments using nested segments.
  • B. Create a High Investment Balance calculated insight.
  • C. Package High Investment Balance Customers in a data kit.
  • D. Create new segments by cloning High Investment Balance Customers.
Answer: A


Explanation: Nested segments are segments that include or exclude one or more existing segments. They allow the marketing team to reuse filters and maintain consistency by using an existing segment to build a new one. For example, the marketing team can create a nested segment that includes High Investment Balance Customers and excludes customers who have opted out of email marketing. This way, they can leverage the foundational segment and apply additional criteria without duplicating the rules. The other options are not the best features to ensure consistency because:
B. A calculated insight is a data object that performs calculations on data lake objects or CRM data and returns a result. It is not a segment and cannot be used for activation or personalization.
C. A data kit is a bundle of packageable metadata that can be exported and imported across Data Cloud orgs. It is not a feature for creating segments, but rather for sharing components.
D. Cloning a segment creates a copy of the segment with the same rules and filters. It does not allow the marketing team to add or remove criteria from the original segment, and it may create confusion and redundancy.
Reference: Create a Nested Segment; Save Time with Nested Segments (Generally Available); Calculated Insights; Create and Publish a Data Kit (Trailhead); Create a Segment in Data Cloud
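
Conceptually, a nested segment composes existing segments with include/exclude logic rather than re-entering their filters. Here is a minimal sketch of that composition as set operations in Python (the membership data is invented for illustration):

```python
# Illustrative only: nested-segment semantics as set operations.
high_investment_balance = {"cust-1", "cust-2", "cust-3", "cust-4"}  # foundational segment
email_opt_outs = {"cust-3"}  # another existing segment

# Nested segment: include the foundational segment, exclude the opt-outs.
# The foundational criteria are reused rather than re-entered, so every
# refinement stays consistent with the original definition.
refined_segment = high_investment_balance - email_opt_outs

print(sorted(refined_segment))  # ['cust-1', 'cust-2', 'cust-4']
```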

Question 8

Cumulus Financial uses Service Cloud as its CRM and stores mobile phone, home phone,
and work phone as three separate fields for its customers on the Contact record. The company plans
to use Data Cloud and ingest the Contact object via the CRM Connector.
What is the most efficient approach that a consultant should take when ingesting this data to ensure
all the different phone numbers are properly mapped and available for use in activation?

  • A. Ingest the Contact object and map the Work Phone, Mobile Phone, and Home Phone to the Contact Point Phone data map object from the Contact data stream.
  • B. Ingest the Contact object and use streaming transforms to normalize the phone numbers from the Contact data stream into a separate Phone data lake object (DLO) that contains three rows, and then map this new DLO to the Contact Point Phone data map object.
  • C. Ingest the Contact object and then create a calculated insight to normalize the phone numbers, and then map to the Contact Point Phone data map object.
  • D. Ingest the Contact object and create formula fields in the Contact data stream on the phone numbers, and then map to the Contact Point Phone data map object.
Answer: B


Explanation: The most efficient approach is B. Streaming transforms enable data manipulation and transformation at the time of ingestion, without requiring any additional processing or storage. A streaming transform can normalize the phone numbers from the Contact data stream, for example by removing spaces, dashes, or parentheses and adding country codes where needed. The normalized phone numbers can then be stored in a separate Phone data lake object (DLO) with one row for each phone number type (work, home, mobile). The Phone DLO can then be mapped to the Contact Point Phone data model object, the standard object that represents a phone number associated with a contact point. This ensures all the phone numbers are available for activation, such as sending SMS messages or making calls to customers.
The other options are less efficient. Option A does not normalize the phone numbers, which may cause issues with activation or identity resolution. Option C requires creating a calculated insight, an additional step that consumes more resources and time than a streaming transform. Option D requires creating formula fields in the Contact data stream, which may not be supported by the CRM Connector or may conflict with existing fields on the Contact object.
Reference: Salesforce Data Cloud Consultant Exam Guide; Data Ingestion and Modeling; Streaming Transforms; Contact Point Phone
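
Conceptually, the streaming transform unpivots three phone columns into one row per phone type. Here is a minimal sketch of that reshaping in plain Python rather than Data Cloud's transform syntax (the source field names are illustrative):

```python
def unpivot_phones(contact: dict) -> list[dict]:
    """Reshape one Contact row into up to three phone rows, one per
    phone type, skipping empty fields and stripping formatting."""
    phone_fields = {  # illustrative CRM field names -> phone type
        "MobilePhone": "Mobile",
        "HomePhone": "Home",
        "Phone": "Work",
    }
    rows = []
    for field, phone_type in phone_fields.items():
        raw = contact.get(field)
        if not raw:
            continue
        # Normalize: keep digits only (drops spaces, dashes, parentheses).
        digits = "".join(ch for ch in raw if ch.isdigit())
        rows.append({"ContactId": contact["Id"], "PhoneType": phone_type, "Phone": digits})
    return rows

print(unpivot_phones({
    "Id": "003xx0001",
    "MobilePhone": "(415) 555-0101",
    "HomePhone": "415-555-0102",
    "Phone": "415 555 0103",
}))  # -> three rows, one per phone type
```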

Question 9

A customer has a Master Customer table from their CRM to ingest into Data Cloud. The
table contains a name and primary email address, along with other personally identifiable
information (PII).
How should the fields be mapped to support identity resolution?

  • A. Create a new custom object with fields that directly match the incoming table.
  • B. Map all fields to the Customer object.
  • C. Map name to the Individual object and email address to the Contact Point Email object.
  • D. Map all fields to the Individual object, adding a custom field for the email address.
Answer: C


Explanation: To support identity resolution in Data Cloud, the fields from the Master Customer table should be mapped to the standard data model objects designed for this purpose. The Individual object stores the name and other personally identifiable information (PII) of a customer, while the Contact Point Email object stores the primary email address and other contact information. These objects are linked by a relationship field indicating that the contact information belongs to the individual. By mapping the fields to these objects, Data Cloud can use identity resolution rules to match and reconcile profiles from different sources based on the name and email address fields. The other options are not recommended: they either create a new custom object that is not part of the standard data model, map all fields to the Customer object, which is not intended for identity resolution, or map all fields to the Individual object, which does not have a standard email address field.
Reference: Data Modeling Requirements for Identity Resolution; Create Unified Individual Profiles
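
A small sketch of that mapping, written as plain Python data so the shape is explicit; the source field names and target attribute names are illustrative, not exact Data Cloud API names:

```python
# Illustrative source-to-DMO field mapping for identity resolution.
FIELD_MAPPINGS = [
    # (source field,  target data model object,  target attribute)
    ("CustomerId",    "Individual",              "Individual Id"),
    ("FirstName",     "Individual",              "First Name"),
    ("LastName",      "Individual",              "Last Name"),
    ("PrimaryEmail",  "Contact Point Email",     "Email Address"),
    # Contact Point Email rows link back to the Individual through a
    # relationship field carrying the Individual Id.
    ("CustomerId",    "Contact Point Email",     "Party (Individual Id)"),
]

for source_field, dmo, attribute in FIELD_MAPPINGS:
    print(f"{source_field:13} -> {dmo}.{attribute}")
```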

Question 10

Cloud Kicks received a Request to be Forgotten by a customer.
In which two ways should a consultant use Data Cloud to honor this request?
Choose 2 answers

  • A. Delete the data from the incoming data stream and perform a full refresh.
  • B. Add the Individual ID to a headerless file and use the delete from file functionality.
  • C. Use Data Explorer to locate and manually remove the Individual.
  • D. Use the Consent API to suppress processing and delete the Individual and related records from source data streams.
Answer: B, D


Explanation: To honor a Request to be Forgotten, a consultant should use Data Cloud in two ways:
Add the Individual ID to a headerless file and use the delete from file functionality. This allows the consultant to delete multiple Individuals from Data Cloud by uploading a CSV file with their IDs. The deletion process is asynchronous and can take up to 24 hours to complete.
Use the Consent API to suppress processing and delete the Individual and related records from source data streams. This allows the consultant to submit a Data Deletion request for an Individual profile using the Consent API. A Data Deletion request deletes the specified Individual entity and any entities where a relationship has been defined between that entity's identifying attribute and the Individual ID attribute. The deletion is reprocessed at 30, 60, and 90 days to ensure a full deletion.
The other options are not correct because:
Deleting the data from the incoming data stream and performing a full refresh will not delete the existing data in Data Cloud, only the new data from the source system.
Using Data Explorer to locate and manually remove the Individual will not delete the related records from the source data streams, only the Individual entity in Data Cloud.
Reference: Delete Individuals from Data Cloud; Requesting Data Deletion or Right to Be Forgotten; Data Refresh for Data Cloud; Data Explorer
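
The headerless-file side is easy to sketch; the Consent API call is shown only as a shape, since the exact host, path, and parameters are defined in the Requesting Data Deletion documentation and everything marked "assumed" below is a placeholder:

```python
import csv
import requests

# 1) Build a headerless CSV of Individual IDs for the delete-from-file flow.
individual_ids = ["0a1xx0000000001", "0a1xx0000000002"]  # hypothetical IDs
with open("individuals_to_delete.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for individual_id in individual_ids:
        writer.writerow([individual_id])  # one ID per row, no header row

# 2) Submit a Data Deletion request through the Consent API.
# Assumed host, path, and parameters -- placeholders only; see Salesforce's
# "Requesting Data Deletion or Right to Be Forgotten" for the real contract.
INSTANCE = "your-instance.salesforce.com"  # assumed host
ACCESS_TOKEN = "<oauth-access-token>"

resp = requests.post(
    f"https://{INSTANCE}/api/v1/consent/action/delete",  # assumed path
    params={"ids": ",".join(individual_ids)},            # assumed parameter
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
```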
