Which one of the following affects the classification of data?
D
Explanation:
The passage of time is one of the factors that affects the classification of data. Data classification is
the process of assigning a level of sensitivity or criticality to data based on its value, impact, and legal
requirements. Data classification helps to determine the appropriate security controls and handling
procedures for the data. However, data classification is not static, but dynamic, meaning that it can
change over time depending on various factors. One of these factors is the passage of time, which
can affect the relevance, usefulness, or sensitivity of the data. For example, data that is classified as
confidential or secret at one point in time may become obsolete, outdated, or declassified at a later
point in time, and thus require a lower level of protection. Conversely, data that is classified as public
or unclassified at one point in time may become more valuable, sensitive, or regulated at a later
point in time, and thus require a higher level of protection. Therefore, data classification should be
reviewed and updated periodically to reflect the changes in the data over time.
The other options are not factors that affect the classification of data, but rather the outcomes or
components of data classification. Assigned security label is the result of data classification, which
indicates the level of sensitivity or criticality of the data. Multilevel Security (MLS) architecture is a
system that supports data classification, which allows different levels of access to data based on the
clearance and need-to-know of the users. Minimum query size is an inference control that can
support data classification; it requires a query to match at least a minimum number of records
before results are returned, which prevents an attacker from isolating individual records.
Which of the following BEST describes the responsibilities of a data owner?
D
Explanation:
The best description of the responsibilities of a data owner is determining the impact the
information has on the mission of the organization. A data owner is a person or entity that has the
authority and accountability for the creation, collection, processing, and disposal of a set of data. A
data owner is also responsible for defining the purpose, value, and classification of the data, as well
as the security requirements and controls for the data. A data owner should be able to determine the
impact the information has on the mission of the organization, which means assessing the potential
consequences of losing, compromising, or disclosing the data. The impact of the information on the
mission of the organization is one of the main criteria for data classification, which helps to establish
the appropriate level of protection and handling for the data.
The other options are not the best descriptions of the responsibilities of a data owner, but rather the
responsibilities of other roles or functions related to data management. Ensuring quality and
validation through periodic audits for ongoing data integrity is a responsibility of a data steward, who
is a person or entity that oversees the quality, consistency, and usability of the data. Maintaining
fundamental data availability, including data storage and archiving is a responsibility of a data
custodian, who is a person or entity that implements and maintains the technical and physical
security of the data. Ensuring accessibility to appropriate users and maintaining appropriate levels of
data security are responsibilities of a data controller, who is a person or entity that determines the
purposes and means of processing the data.
An organization has doubled in size due to a rapid market share increase. The size of the Information
Technology (IT) staff has maintained pace with this growth. The organization hires several contractors
whose onsite time is limited. The IT department has pushed its limits building servers and rolling out
workstations and has a backlog of account management requests.
Which contract is BEST in offloading the task from the IT staff?
B
Explanation:
Identity as a Service (IDaaS) is the best contract for offloading the task of account management from
the IT staff. IDaaS is a cloud-based service that provides identity and access management (IAM)
functions, such as user authentication, authorization, provisioning, deprovisioning, password
management, single sign-on (SSO), and multifactor authentication (MFA). IDaaS can help the
organization to streamline and automate the account management process, reduce the workload
and costs of the IT staff, and improve the security and compliance of the user accounts. IDaaS can
also support the contractors who have limited onsite time, as they can access the organization’s
resources remotely and securely through the IDaaS provider.
The other options are not as effective as IDaaS in offloading the task of account management from
the IT staff, as they do not provide IAM functions. Platform as a Service (PaaS) is a cloud-based
service that provides a platform for developing, testing, and deploying applications, but it does not
manage the user accounts for the applications. Desktop as a Service (DaaS) is a cloud-based service
that provides virtual desktops for users to access applications and data, but it does not manage the
user accounts for the virtual desktops. Software as a Service (SaaS) is a cloud-based service that
provides software applications for users to use, but it does not manage the user accounts for the
software applications.
When implementing a data classification program, why is it important to avoid too much granularity?
A
Explanation:
When implementing a data classification program, it is important to avoid too much granularity,
because the process will require too many resources. Data classification is the process of assigning a
level of sensitivity or criticality to data based on its value, impact, and legal requirements. Data
classification helps to determine the appropriate security controls and handling procedures for the
data. However, data classification is not a simple or straightforward process, as it involves many
factors, such as the nature, context, and scope of the data, the stakeholders, the regulations, and the
standards. If the data classification program has too many levels or categories of data, it will increase
the complexity, cost, and time of the process, and reduce the efficiency and effectiveness of the data
protection. Therefore, data classification should be done with a balance between granularity and
simplicity, and follow the principle of proportionality, which means that the level of protection
should be proportional to the level of risk.
The other options are not the main reasons to avoid too much granularity in data classification, but
rather the potential challenges or benefits of data classification. It will be difficult to apply to both
hardware and software is a challenge of data classification, as it requires consistent and compatible
methods and tools for labeling and protecting data across different types of media and devices. It will
be difficult to assign ownership to the data is a challenge of data classification, as it requires clear
and accountable roles and responsibilities for the creation, collection, processing, and disposal of
data. The process being perceived as having value is a benefit of data classification, as it
demonstrates the organization's commitment to protecting its data assets and complying with its
obligations.
In a data classification scheme, the data is owned by the
B
Explanation:
In a data classification scheme, the data is owned by the business managers. Business managers are
the persons or entities that have the authority and accountability for the creation, collection,
processing, and disposal of a set of data. Business managers are also responsible for defining the
purpose, value, and classification of the data, as well as the security requirements and controls for
the data. Business managers should be able to determine the impact the information has on the
mission of the organization, which means assessing the potential consequences of losing,
compromising, or disclosing the data. The impact of the information on the mission of the
organization is one of the main criteria for data classification, which helps to establish the
appropriate level of protection and handling for the data.
The other options are not the data owners in a data classification scheme, but rather the other roles
or functions related to data management. System security managers are the persons or entities that
oversee the security of the information systems and networks that store, process, and transmit the
data. They are responsible for implementing and maintaining the technical and physical security of
the data, as well as monitoring and auditing the security performance and incidents. Information
Technology (IT) managers are the persons or entities that manage the IT resources and services that
support the business processes and functions that use the data. They are responsible for ensuring
the availability, reliability, and scalability of the IT infrastructure and applications, as well as providing
technical support and guidance to the users and stakeholders. End users are the persons or entities
that access and use the data for their legitimate purposes and needs. They are responsible for
complying with the security policies and procedures for the data, as well as reporting any security
issues or violations.
Which of the following is an initial consideration when developing an information security
management system?
B
Explanation:
When developing an information security management system (ISMS), an initial consideration is to
understand the value of the information assets that the organization owns or processes. An
information asset is any data, information, or knowledge that has value to the organization and
supports its mission, objectives, and operations. Understanding the value of the information assets
helps to determine the appropriate level of protection and investment for them, as well as the
potential impact and consequences of losing, compromising, or disclosing them. Understanding the
value of the information assets also helps to identify the stakeholders, owners, and custodians of the
information assets, and their roles and responsibilities in the ISMS.
The other options are not initial considerations, but rather subsequent or concurrent considerations
when developing an ISMS. Identifying the contractual security obligations that apply to the
organization is a consideration that depends on the nature, scope, and context of the information
assets, as well as the relationships and agreements with the external parties. Identifying the level of
residual risk that is tolerable to management is a consideration that depends on the risk appetite and
tolerance of the organization, as well as the risk assessment and analysis of the information assets.
Identifying relevant legislative and regulatory compliance requirements is a consideration that
depends on the legal and ethical obligations and expectations of the organization, as well as the
jurisdiction and industry of the information assets.
Which of the following is an effective control in preventing electronic cloning of Radio Frequency
Identification (RFID) based access cards?
D
Explanation:
Asymmetric Card Authentication Key (CAK) challenge-response is an effective control for preventing
electronic cloning of RFID-based access cards. RFID-based access cards are contactless cards that use
radio frequency identification (RFID) technology to communicate with a reader and grant access to a
physical or logical resource. RFID-based access cards are vulnerable to electronic cloning, which is the
process of copying the data and identity of a legitimate card to a counterfeit card, and using it to
impersonate the original cardholder and gain unauthorized access. Asymmetric CAK challenge-
response is a cryptographic technique that prevents electronic cloning by using public key
cryptography and digital signatures to verify the authenticity and integrity of the card and the reader.
Asymmetric CAK challenge-response works as follows:
The card and the reader each have a pair of public and private keys, and the public keys are
exchanged and stored in advance.
When the card is presented to the reader, the reader generates a random number (nonce) and sends
it to the card.
The card signs the nonce with its private key and sends the signature back to the reader.
The reader verifies the signature with the card’s public key and grants access if the verification is
successful.
The card also verifies the reader’s identity by requesting its signature on the nonce and checking it
with the reader’s public key.
Asymmetric CAK challenge-response prevents electronic cloning because the private keys of the card
and the reader are never transmitted or exposed, and the signatures are unique and non-reusable for
each transaction. Therefore, a cloned card cannot produce a valid signature without knowing the
private key of the original card, and a rogue reader cannot impersonate a legitimate reader without
knowing its private key.
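To make the exchange concrete, the following Python sketch simulates the signature-based
challenge-response described above. It is a simplified illustration built on ECDSA from the
cryptography package; the key names, curve choice, and framing are assumptions for illustration,
not the exact CAK protocol of any particular card.

import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Each party holds its own private key; the peer's public key is enrolled in advance.
card_key = ec.generate_private_key(ec.SECP256R1())
reader_key = ec.generate_private_key(ec.SECP256R1())
card_pub, reader_pub = card_key.public_key(), reader_key.public_key()

# The reader challenges the card with a fresh random nonce.
nonce = os.urandom(16)

# The card signs the nonce; its private key never leaves the card.
card_sig = card_key.sign(nonce, ec.ECDSA(hashes.SHA256()))

# The reader verifies the signature against the card's enrolled public key.
try:
    card_pub.verify(card_sig, nonce, ec.ECDSA(hashes.SHA256()))
    print("Card authenticated")
except InvalidSignature:
    print("Cloned or tampered card rejected")

# Mutual authentication: the card challenges the reader the same way.
reader_sig = reader_key.sign(nonce, ec.ECDSA(hashes.SHA256()))
reader_pub.verify(reader_sig, nonce, ec.ECDSA(hashes.SHA256()))

A cloned card that copied only the public data (certificate, identifiers) fails at the sign step,
because it cannot produce a valid signature over a fresh nonce without the original private key.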
The other options are not as effective as asymmetric CAK challenge-response in preventing
electronic cloning of RFID-based access cards. Personal Identity Verification (PIV) is a standard for
federal employees and contractors to use smart cards for physical and logical access, but it does not
specify the cryptographic technique for RFID-based access cards. Cardholder Unique Identifier
(CHUID) authentication is a technique that uses a unique number and a digital certificate to identify
the card and the cardholder, but it does not prevent replay attacks or verify the reader’s identity.
Physical Access Control System (PACS) repeated attempt detection is a technique that monitors and
alerts on multiple failed or suspicious attempts to access a resource, but it does not prevent the
cloning of the card or the impersonation of the reader.
Topic 3, Security Architecture and Engineering
Which security service is served by the process of encrypting plaintext with the sender’s private key
and decrypting ciphertext with the sender’s public key?
C
Explanation:
The security service that is served by the process of encrypting plaintext with the sender’s private
key and decrypting ciphertext with the sender’s public key is identification. Identification is the
process of verifying the identity of a person or entity that claims to be who or what it is.
Identification can be achieved by using public key cryptography and digital signatures, which are
based on the process of encrypting plaintext with the sender’s private key and decrypting ciphertext
with the sender’s public key. This process works as follows:
The sender has a pair of public and private keys, and the public key is shared with the receiver in
advance.
The sender encrypts the plaintext message with its private key, which produces a ciphertext that is
also a digital signature of the message.
The sender sends the ciphertext to the receiver, along with the plaintext message or a hash of the
message.
The receiver decrypts the ciphertext with the sender’s public key, which produces the same plaintext
message or hash of the message.
The receiver compares the decrypted message or hash with the original message or hash, and
verifies the identity of the sender if they match.
The process of encrypting plaintext with the sender’s private key and decrypting ciphertext with the
sender’s public key serves identification because it ensures that only the sender can produce a valid
ciphertext that can be decrypted by the receiver, and that the receiver can verify the sender’s
identity by using the sender’s public key. This process also provides non-repudiation of origin,
which means that the sender cannot later deny having sent the message, as the signature serves as
proof that it was produced with the sender’s private key.
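A minimal Python sketch of this identification check, using RSA signatures from the cryptography
package, is shown below. Modern libraries expose the "encrypt with the private key" idea as explicit
sign and verify operations; the message content and key size here are illustrative assumptions.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
sender_pub = sender_key.public_key()  # shared with the receiver in advance

message = b"funds transfer order #42"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# The sender signs (conceptually, encrypts a digest of) the message
# with its private key.
signature = sender_key.sign(message, pss, hashes.SHA256())

# The receiver verifies (conceptually, decrypts) with the sender's public
# key; success proves the message came from the private-key holder.
sender_pub.verify(signature, message, pss, hashes.SHA256())
print("Sender identity verified")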
The other options are not the security services that are served by the process of encrypting plaintext
with the sender’s private key and decrypting ciphertext with the sender’s public key. Confidentiality
is the property of ensuring that the message is readable only by the intended parties, and it is
achieved by encrypting plaintext with the receiver’s public key and decrypting ciphertext with the
receiver’s private key. Integrity is the property of ensuring that the message is not modified or
corrupted during transmission, and it is achieved by using hash functions and message
authentication codes. Availability is the property of ensuring that the message is accessible and
usable by the authorized parties, and it is achieved by using redundancy, backup, and recovery
mechanisms.
Which of the following mobile code security models relies only on trust?
A
Explanation:
Code signing is the mobile code security model that relies only on trust. Mobile code is a type of
software that can be transferred from one system to another and executed without installation or
compilation. Mobile code can be used for various purposes, such as web applications, applets,
scripts, macros, etc. Mobile code can also pose various security risks, such as malicious code,
unauthorized access, data leakage, etc. Mobile code security models are the techniques that are
used to protect the systems and users from the threats of mobile code. Code signing is a mobile code
security model that relies only on trust, which means that the security of the mobile code depends
on the reputation and credibility of the code provider. Code signing works as follows:
The code provider has a pair of public and private keys, and obtains a digital certificate from a trusted
third party, such as a certificate authority (CA), that binds the public key to the identity of the code
provider.
The code provider signs the mobile code with its private key and attaches the digital certificate to the
mobile code.
The code consumer receives the mobile code and verifies the signature and the certificate with the
public key of the code provider and the CA, respectively.
The code consumer decides whether to trust and execute the mobile code based on the identity and
reputation of the code provider.
Code signing relies only on trust because it does not enforce any security restrictions or controls on
the mobile code, but rather leaves the decision to the code consumer. Code signing also does not
guarantee the quality or functionality of the mobile code, but rather the authenticity and integrity of
the code provider. Code signing can be effective if the code consumer knows and trusts the code
provider, and if the code provider follows the security standards and best practices. However, code
signing can also be ineffective if the code consumer is unaware or careless of the code provider, or if
the code provider is compromised or malicious.
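The trust decision can be sketched in Python using Ed25519 signatures from the cryptography
package. Real code signing also involves certificates chaining to a trusted CA; here the provider’s
public key is simply assumed to be trusted out of band, and the names are illustrative.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

provider_key = ed25519.Ed25519PrivateKey.generate()
provider_pub = provider_key.public_key()  # distributed via a certificate

mobile_code = b"print('hello from mobile code')"
signature = provider_key.sign(mobile_code)  # the provider signs its code

try:
    provider_pub.verify(signature, mobile_code)
    # The signature proves origin and integrity, but nothing here restricts
    # what the code does once run -- executing it is purely a trust decision.
    exec(mobile_code)
except InvalidSignature:
    print("Rejected: code was modified or signed by an unknown party")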
The other options are not mobile code security models that rely only on trust, but rather models that
use other techniques to limit or isolate the mobile code. Class authentication is a mobile code security model
that verifies the permissions and capabilities of the mobile code based on its class or type, and
allows or denies the execution of the mobile code accordingly. Sandboxing is a mobile code security
model that executes the mobile code in a separate and restricted environment, and prevents the
mobile code from accessing or affecting the system resources or data. Type safety is a mobile code
security model that checks the validity and consistency of the mobile code, and prevents the mobile
code from performing illegal or unsafe operations.
Which technique can be used to make an encryption scheme more resistant to a known plaintext
attack?
D
Explanation:
Compressing the data before encryption is a technique that can be used to make an encryption
scheme more resistant to a known plaintext attack. A known plaintext attack is a type of cryptanalysis
where the attacker has access to some pairs of plaintext and ciphertext encrypted with the same key,
and tries to recover the key or decrypt other ciphertexts. A known plaintext attack can exploit the
statistical properties or patterns of the plaintext or the ciphertext to reduce the search space or guess
the key. Compressing the data before encryption can reduce the redundancy and increase the
entropy of the plaintext, making it harder for the attacker to find any correlations or similarities
between the plaintext and the ciphertext. Compressing the data before encryption can also reduce
the size of the plaintext, making it more difficult for the attacker to obtain enough plaintext-
ciphertext pairs for a successful attack.
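A minimal sketch of the compress-then-encrypt pipeline, using zlib and Fernet (authenticated
symmetric encryption) from the cryptography package; the sample data is an illustrative stand-in
for redundant plaintext.

import zlib
from cryptography.fernet import Fernet

plaintext = b"ATTACK AT DAWN. " * 64  # highly redundant, low-entropy data

compressed = zlib.compress(plaintext, 9)
print(len(plaintext), "->", len(compressed), "bytes after compression")

# Encrypt the compressed data; most of the redundancy a known-plaintext
# analysis could exploit has been squeezed out before encryption.
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(compressed)

# Decryption reverses the pipeline: decrypt, then decompress.
recovered = zlib.decompress(Fernet(key).decrypt(ciphertext))
assert recovered == plaintext

One caveat worth noting: in interactive protocols where an attacker can inject chosen data
alongside secrets, compress-then-encrypt can leak information through ciphertext length (as in the
CRIME and BREACH attacks), so the technique is context-dependent.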
The other options are not techniques that can be used to make an encryption scheme more resistant
to a known plaintext attack, but rather techniques that can introduce other security issues or
inefficiencies. Hashing the data before encryption is not a useful technique, as hashing is a one-way
function that cannot be reversed, so the original data could never be recovered after decryption.
Hashing the data after encryption is also not useful, as it adds no security to the encryption, and the
hash can be easily computed by anyone who has access to the ciphertext. Compressing the data
after encryption is ineffective, because ciphertext is effectively random and contains little
redundancy for a compression algorithm to remove, so it compresses poorly and gains no additional
resistance to attack.