An organization uses an AI tool to scan social media for product reviews. Fraudulent social media
accounts begin posting negative reviews attacking the organization's product. Which type of AI attack
is MOST likely to have occurred?
C
Explanation:
The AAISM materials classify availability attacks as attempts to disrupt or degrade the functioning of
an AI system so that its outputs become unreliable or unusable. In this scenario, the fraudulent social
media accounts are deliberately overwhelming the AI tool with misleading negative reviews,
undermining its ability to deliver accurate sentiment analysis. This aligns directly with the concept of
an availability attack. Model inversion relates to reconstructing training data from outputs, deepfakes
involve synthetic content generation, and data poisoning corrupts the training set rather than
manipulating inputs at runtime. Therefore, the fraudulent review campaign is most accurately
identified as an availability attack.
Reference:
AAISM Study Guide – AI Risk Management (Adversarial Threats and Availability Risks)
ISACA AI Security Management – Attack Classifications
An attacker crafts inputs to a large language model (LLM) to exploit output integrity controls. Which
of the following types of attacks is this an example of?
A
Explanation:
According to the AAISM framework, prompt injection is the act of deliberately crafting malicious or
manipulative inputs to override, bypass, or exploit the model’s intended controls. In this case, the
attacker is targeting the integrity of the model’s outputs by exploiting weaknesses in how it
interprets and processes prompts. Jailbreaking is a subtype of prompt injection specifically designed
to override safety restrictions, while evasion attacks target classification boundaries in other ML
contexts, and remote code execution refers to system-level exploitation outside of the AI inference
context. The most accurate classification of this attack is prompt injection.
Reference:
AAISM Exam Content Outline – AI Technologies and Controls (Prompt Security and Input
Manipulation)
AI Security Management Study Guide – Threats to Output Integrity
An organization using an AI model for financial forecasting identifies inaccuracies caused by missing
data. Which of the following is the MOST effective data cleaning technique to improve model
performance?
B
Explanation:
The AAISM study content emphasizes that data quality management is a central pillar of AI risk
reduction. Missing data introduces bias and undermines predictive accuracy if not addressed
systematically. The most effective remediation is to apply statistical imputation and related methods
to fill in or adjust for missing values in a way that minimizes bias and preserves data integrity.
Retraining on flawed data does not solve the underlying issue. Deleting outliers may harm model
robustness, and hyperparameter tuning optimizes model mechanics but cannot resolve missing
information. Therefore, the proper corrective technique for missing data is the application of
statistical methods to reduce bias.
Reference:
AAISM Study Guide – AI Risk Management (Data Integrity and Quality Controls)
ISACA AI Governance Guidance – Data Preparation and Bias Mitigation
Which of the following is MOST important to consider when validating a third-party AI tool?
B
Explanation:
The AAISM framework specifies that when adopting third-party AI tools, the right to audit is the most
critical contractual and governance safeguard. This ensures that the organization can independently
verify compliance with security, privacy, and ethical requirements throughout the lifecycle of the
tool. Terms and conditions provide general usage guidance but often limit liability rather than
ensuring transparency. Industry certifications may indicate good practice but do not substitute for
direct verification. Roundtable testing is useful for evaluation but lacks enforceability. Only the
contractual right to audit provides formal assurance that the tool operates in accordance with
organizational policies and external regulations.
Reference:
AAISM Exam Content Outline – AI Governance and Program Management (Third-Party Governance)
AI Security Management Study Guide – Vendor Oversight and Audit Rights
Which of the following is the BEST mitigation control for membership inference attacks on AI
systems?
C
Explanation:
Membership inference attacks attempt to determine whether a particular data point was part of a
model’s training set, which risks violating privacy. The AAISM study guide highlights differential
privacy as the most effective mitigation because it introduces mathematical noise that obscures
individual contributions without significantly degrading model performance. Ensemble methods
improve robustness but do not specifically protect privacy. Threat modeling and red teaming help
identify risks but are not direct controls. The explicit mitigation control aligned with privacy
preservation for membership inference is differential privacy.
Reference:
AAISM Study Guide – AI Technologies and Controls (Privacy-Preserving Techniques)
ISACA AI Security Management – Membership Inference Mitigations
Which of the following types of testing can MOST effectively mitigate prompt hacking?
D
Explanation:
Prompt hacking manipulates large language models by injecting adversarial instructions into inputs
to bypass or override safeguards. The AAISM framework identifies adversarial testing as the most
effective way to simulate such manipulative attempts, expose vulnerabilities, and improve the
resilience of controls. Load testing evaluates performance, input testing checks format validation,
and regression testing validates functionality after changes. None of these directly address the
manipulation of natural language inputs. Adversarial testing is therefore the correct approach to
mitigate prompt hacking risks.
Reference:
AAISM Exam Content Outline – AI Risk Management (Testing and Assurance Practices)
AI Security Management Study Guide – Adversarial Testing Against Prompt Manipulation
Which of the following technologies can be used to manage deepfake risk?
C
Explanation:
The AAISM study material highlights blockchain as a control mechanism for managing deepfake risk
because it provides immutable verification of digital media provenance. By anchoring original data
signatures on a blockchain, organizations can verify authenticity and detect tampered or synthetic
content. Data tagging helps organize but does not guarantee authenticity. MFA and adaptive
authentication strengthen identity security but do not address content manipulation risks.
Blockchain’s immutability and traceability make it the recognized technology for mitigating deepfake
challenges.
Reference:
AAISM Study Guide – AI Technologies and Controls (Emerging Controls for Content Authenticity)
ISACA AI Governance Guidance – Blockchain for Data Integrity and Deepfake Mitigation
Which of the following would BEST help mitigate vulnerabilities associated with hidden triggers in
generative AI models?
C
Explanation:
Hidden triggers are adversarial backdoors planted in AI models, activated only by specific inputs. The
AAISM materials specify that the best mitigation is to use adversarial training, which deliberately
exposes the model to potential trigger inputs during training so it can learn to neutralize or resist
them. Retraining with diverse data reduces bias but does not address hidden triggers. Differential
privacy is focused on privacy preservation, not adversarial resilience. Monitoring outputs can help
with detection but is reactive rather than preventative. The proactive solution highlighted in the
study guide is adversarial training.
Reference:
AAISM Exam Content Outline – AI Risk Management (Backdoors and Hidden Triggers)
AI Security Management Study Guide – Adversarial Training as a Mitigation Control
An organization plans to apply an AI system to its business, but developers find it difficult to predict
system results due to lack of visibility to the inner workings of the AI model. Which of the following is
the GREATEST challenge associated with this situation?
A
Explanation:
AAISM materials identify explainability and transparency as the greatest challenge when models
operate as “black boxes” where inner logic is opaque. Inability to interpret how results are produced
undermines the trust of business users, customers, regulators, and auditors. Explainability is
emphasized as a critical governance requirement, because without it, ethical validation,
accountability, and regulatory compliance are at risk. Assigning risk owners or measuring transaction
times are operational concerns, but they do not address the core trust deficit caused by lack of
visibility. The greatest challenge in this situation is therefore the loss of end-user trust due to
insufficient explainability.
Reference:
AAISM Study Guide – AI Governance and Program Management (Transparency and Explainability)
ISACA AI Security Management – Ethical and Trust Considerations
Embedding unique identifiers into AI models would BEST help with:
B
Explanation:
The AAISM framework explains that embedding unique identifiers—such as digital watermarks or
model fingerprints—enables organizations to trace and verify model provenance. This technique is
used for tracking ownership and intellectual property rights over models, particularly when sharing,
licensing, or distributing AI systems. While identifiers may support certain security functions, their
primary control objective is ownership verification, not preventing access, bias removal, or
adversarial detection. The correct alignment with AAISM controls is tracking ownership.
Reference:
AAISM Exam Content Outline – AI Technologies and Controls (Model Provenance and Watermarking)
AI Security Management Study Guide – Ownership and Accountability of Models