Spoofing
An attacker could take over the port or socket that the server normally uses
2
Spoofing
An attacker could try one credential after another and there's nothing to slow them down (online or offline)
3
Spoofing
An attacker can anonymously connect, because we expect authentication to be done at a higher level
4
Spoofing
An attacker can confuse a client because there are too many ways to identify a server
5
Spoofing
An attacker can spoof a server because identifiers aren't stored on the client and checked for consistency on re-connection (that is, there's no key persistence)
6
Spoofing
An attacker can connect to a server or peer over a link that isn't authenticated (and encrypted)
7
Spoofing
An attacker could steal credentials stored on the server and reuse them (for example, a key is stored in a world readable file)
8
Spoofing
An attacker who gets a password can reuse it (Use stronger authenticators)
9
Spoofing
An attacker can choose to use weaker or no authentication
X
Spoofing
An attacker could steal credentials stored on the client and reuse them
Spoofing
An attacker could go after the way credentials are updated or recovered (account recovery doesn't require disclosing the old password)
Spoofing
Your system ships with a default admin password, and doesn't force a change
Spoofing
You've invented a new Spoofing attack
A
Tampering
An attacker can take advantage of your custom key exchange or integrity control which you built instead of using standard crypto
2
Tampering
An attacker can modify your build system and produce signed builds of your software
3
Tampering
Your code makes access control decisions all over the place, rather than with a security kernel
4
Tampering
An attacker can replay data without detection because your code doesn't provide timestamps or sequence numbers
5
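The replay card above is easiest to see in code. Below is a minimal sketch, assuming a per-sender message counter and a freshness window (the names and thresholds are illustrative; in practice the sequence number and timestamp must also be covered by the message's MAC or signature so they cannot be rewritten):

```python
import time

MAX_AGE_SECONDS = 30          # illustrative freshness window
_last_seq = {}                # sender_id -> highest sequence number accepted

def accept_message(sender_id, seq, timestamp):
    """Reject stale or already-seen messages (assumes seq/timestamp are
    covered by the message's integrity check)."""
    if abs(time.time() - timestamp) > MAX_AGE_SECONDS:
        return False          # too old or clock-skewed: treat as a replay
    if seq <= _last_seq.get(sender_id, -1):
        return False          # sequence number not strictly increasing
    _last_seq[sender_id] = seq
    return True
```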
Tampering
An attacker can write to a data store your code relies on
6
Tampering
An attacker can bypass permissions because you don't make names canonical before checking access permissions
7
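A minimal sketch of the card above (the allowed root and path names are illustrative): canonicalize the name first, then make the access decision on the canonical form.

```python
import os

ALLOWED_ROOT = os.path.realpath("/srv/app/public")   # illustrative root directory

def open_public_file(user_supplied_path):
    # Resolve "..", symlinks and duplicate separators *before* checking access.
    full_path = os.path.realpath(os.path.join(ALLOWED_ROOT, user_supplied_path))
    if os.path.commonpath([full_path, ALLOWED_ROOT]) != ALLOWED_ROOT:
        raise PermissionError("path escapes the allowed root")
    return open(full_path, "rb")
```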
Tampering
An attacker can manipulate data because there's no integrity protection for data on the network
8
Tampering
An attacker can provide or control state information
9
Tampering
An attacker can alter information in a data store because it has weak/open permissions or includes a group which is equivalent to everyone ("anyone with a Facebook account")
X
Tampering
An attacker can write to some resource because permissions are granted to the world or there are no ACLs
Tampering
An attacker can change parameters over a trust boundary and after validation (for example, important parameters in a hidden field in HTML, or passing a pointer to critical memory)
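For the hidden-field example on the card above, the usual fix is to re-derive security-relevant values server-side instead of trusting what came back across the trust boundary. A minimal sketch, where PRICES and charge_customer are hypothetical placeholders:

```python
PRICES = {"sku-123": 1999}              # authoritative server-side data (cents)

def handle_checkout(form):
    sku = form["sku"]
    # WRONG: total = int(form["price"])   # attacker controls the hidden field
    total = PRICES[sku]                   # re-derive from data the client cannot change
    return charge_customer(form["customer_id"], total)   # hypothetical helper
```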
Tampering
An attacker can load code inside your process via an extension point
Tampering
You've invented a new Tampering attack
A
Repudiation
An attacker can pass data through the log to attack a log reader, and there's no documentation of what sorts of validation are done
2
Repudiation
A low privilege attacker can read interesting security information in the logs
3
Repudiation
An attacker can alter digital signatures because the digital signature system you're implementing is weak, or uses MACs where it should use a signature
4
Repudiation
An attacker can alter log messages on a network because they lack strong integrity controls
5
Repudiation
An attacker can create a log entry without a timestamp (or no log entry is timestamped)
6
Repudiation
An attacker can make the logs wrap around and lose data
7
Repudiation
An attacker can make a log lose or confuse security information
8
Repudiation
An attacker can use a shared key to authenticate as different principals, confusing the information in the logs
9
Repudiation
An attacker can get arbitrary data into logs from unauthenticated (or weakly authenticated) outsiders without validation
X
Repudiation
An attacker can edit logs and there's no way to tell (perhaps because there's no heartbeat option for the logging system)
Repudiation
An attacker can say "I didn't do that," and you'd have no way to prove them wrong
Repudiation
The system has no logs
Repudiation
You've invented a new Repudiation attack
A
Information Disclosure
An attacker can brute-force file encryption because there's no defense in place (example defense, password stretching)
2
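A minimal sketch of the "password stretching" defense the card above mentions, using Python's standard PBKDF2 (the iteration count and salt size are illustrative):

```python
import hashlib, os

def derive_file_key(password, salt=None):
    # A slow, salted key-derivation function makes each offline guess
    # expensive, unlike a single fast hash of the password.
    salt = salt or os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return key, salt                      # store the salt next to the ciphertext
```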
Information Disclosure
An attacker can see error messages with security sensitive content
3
Information Disclosure
An attacker can read content because messages (say, an email or HTTP cookie) aren't encrypted even if the channel is encrypted
4
Information Disclosure
An attacker may be able to read a document or data because it's encrypted with a non-standard algorithm
5
Information Disclosure
An attacker can read data because it's hidden or occluded (for undo or change tracking) and the user might forget that it's there
6
Information Disclosure
An attacker can act as a 'man in the middle' because you don't authenticate endpoints of a network connection
7
Information Disclosure
An attacker can access information through a search indexer, logger, or other such mechanism
8
Information Disclosure
An attacker can read sensitive information in a file with permissive permissions
9
Information Disclosure
An attacker can read information in files or databases with no access controls
X
Information Disclosure
An attacker can discover the fixed key being used to encrypt
Information Disclosure
An attacker can read the entire channel because the channel (say, HTTP or SMTP) isn't encrypted
Information Disclosure
An attacker can read network information because there's no cryptography used
Information Disclosure
You've invented a new Information Disclosure attack
A
Denial of Service
An attacker can make your authentication system unusable or unavailable
2
Denial of Service
An attacker can drain our easily replaceable battery
3
Denial of Service
An attacker can drain a battery that's hard to replace (sealed in a phone, an implanted medical device, or in a hard to reach location)
4
Denial of Service
An attacker can spend our cloud budget
5
Denial of Service
An attacker can make a server unavailable or unusable without ever authenticating but the problem goes away when the attacker stops
6
Denial of Service
An attacker can make a client unavailable or unusable and the problem persists after the attacker goes away
7
Denial of Service
An attacker can make a server unavailable or unusable and the problem persists after the attacker goes away
8
Denial of Service
An attacker can make a client unavailable or unusable without ever authenticating and the problem persists after the attacker goes away
9
Denial of Service
An attacker can make a server unavailable or unusable without ever authenticating and the problem persists after the attacker goes away
X
Denial of Service
An attacker can cause the logging subsystem to stop working
Denial of Service
An attacker can amplify a Denial of Service attack through this component with amplification on the order of 10 to 1
Denial of Service
An attacker can amplify a Denial of Service attack through this component with amplification on the order of 100 to 1
Denial of Service
You've invented a new Denial of Service attack
A
Elevation of Privilege
An attacker has compromised a key technology supplier
2
Elevation of Privilege
An attacker can access the cloud service which manages your devices
3
Elevation of Privilege
An attacker can escape from a container or other sandbox
4
Elevation of Privilege
An attacker can force data through different validation paths which give different results
5
Elevation of Privilege
An attacker could take advantage of permissions you set, but don't use
6
Elevation of Privilege
An attacker can provide a pointer across a trust boundary, rather than data which can be validated
7
Elevation of Privilege
An attacker can enter data that is checked while still under their control and used later on the other side of a trust boundary
8
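The card above describes a time-of-check/time-of-use gap. A minimal sketch of the usual fix is to copy the data out of attacker-reachable storage before validating it; validate and use below are hypothetical placeholders:

```python
def process_request(shared_buffer):
    private_copy = bytes(shared_buffer)   # snapshot before the check
    if not validate(private_copy):        # hypothetical validation routine
        raise ValueError("rejected")
    return use(private_copy)              # never re-read shared_buffer afterwards
```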
Elevation of Privilege
There's no reasonable way for a caller to figure out what validation of tainted data you perform before passing it to them
9
Elevation of Privilege
There's no reasonable way for a caller to figure out what security assumptions you make
X
Elevation of Privilege
An attacker can reflect input back to a user, like cross site scripting
Elevation of Privilege
You include user-generated content within your page, possibly including the content of random URLs
Elevation of Privilege
An attacker can inject a command that the system will run at a higher privilege level
Elevation of Privilege
You've invented a new Elevation of Privilege attack
A
2
Brian can gather information about the underlying configurations, schemas, logic, code, software, services and infrastructure due to the content of error messages, or poor configuration, or the presence of default installation files or old, test, backup or copies of resources, or exposure of source code
3
Robert can input malicious data because the allowed protocol format is not being checked, or duplicates are accepted, or the structure is not being verified, or the individual data elements are not being validated for format, type, range, length and a whitelist of allowed characters or formats
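A minimal sketch of the allow-list validation the card above calls for, applied to a single illustrative field:

```python
import re

USERNAME_RE = re.compile(r"^[A-Za-z0-9_.-]{3,32}$")   # illustrative allow-list

def validate_username(value):
    # Check type, length and permitted characters before the value is used.
    if not isinstance(value, str) or not USERNAME_RE.fullmatch(value):
        raise ValueError("invalid username")
    return value
```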
4
Dave can input malicious field names or data because it is not being checked within the context of the current user and process
5
Jee can bypass the centralized encoding routines since they are not being used everywhere, or the wrong encodings are being used
6
Jason can bypass the centralized validation routines since they are not being used on all inputs
7
Jan can craft special payloads to foil input validation because the character set is not specified/enforced, or the data is encoded multiple times, or the data is not fully converted into the same format the application uses (e.g. canonicalization) before being validated, or variables are not strongly typed
8
Oana can bypass the centralized sanitization routines since they are not being used comprehensively
9
Shamun can bypass input validation or output validation checks because validation failures are not rejected and/or sanitized
10
Darío can exploit the trust the application places in a source of data (e.g. user-definable data, manipulation of locally stored data, alteration to state data on a client device, lack of verification of identity during data validation such as Darío can pretend to be Colin)
J
Toby has control over input validation, output validation or output encoding code or routines so they can be bypassed
Q
Xavier can inject data into a client or device side interpreter because a parameterised interface is not being used, or has not been implemented correctly, or the data has not been encoded correctly for the context, or there is no restrictive policy on code or data includes
K
Gabe can inject data into a server-side interpreter (e.g. SQL, OS commands, XPath, Server JavaScript, SMTP) because a strongly typed parameterised interface is not being used or has not been implemented correctly
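A minimal sketch of the strongly typed, parameterised interface the card above refers to, using Python's built-in sqlite3 driver (the table and column names are illustrative):

```python
import sqlite3

def find_user(conn, email):
    # WRONG: conn.execute(f"SELECT id FROM users WHERE email = '{email}'")
    # The placeholder below lets the driver bind the value, so attacker
    # input is never interpreted as SQL.
    return conn.execute("SELECT id FROM users WHERE email = ?", (email,)).fetchone()
```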
A
You have invented a new attack against Data Validation and Encoding
Read more about this topic in OWASP's free Cheat Sheets on Input Validation, XSS Prevention, DOM-based XSS Prevention, SQL Injection Prevention, and Query Parameterization
2
James can undertake authentication functions without the real user ever being aware this has occurred (e.g. attempt to log in, log in with stolen credentials, reset the password)
3
Muhammad can obtain a user's password or other secrets such as security questions, by observation during entry, or from a local cache, or from memory, or in transit, or by reading it from some unprotected location, or because it is widely known, or because it never expires, or because the user cannot change her own password
4
Sebastien can easily identify user names or can enumerate them
5
Javier can use default, test or easily guessable credentials to authenticate, or can use an old account or an account not necessary for the application
6
Sven can reuse a temporary password because the user does not have to change it on first use, or it has too long or no expiry, or it does not use an out-of-band delivery method (e.g. post, mobile app, SMS)
7
Cecilia can use brute force and dictionary attacks against one or many accounts without limit, or these attacks are simplified due to insufficient complexity, length, expiration and re-use requirements for passwords
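A minimal sketch of one way to slow the online guessing the card above describes: count failures per account and back off exponentially. The store and thresholds are illustrative; a real deployment would persist this and clear it on successful login.

```python
import time

_failures = {}                            # username -> (count, last_failure_time)

def check_rate_limit(username):
    count, last = _failures.get(username, (0, 0.0))
    wait = min(2 ** count, 300)           # exponential backoff, capped at 5 minutes
    if count >= 3 and time.time() - last < wait:
        raise PermissionError("too many attempts, try again later")

def record_failure(username):
    count, _ = _failures.get(username, (0, 0.0))
    _failures[username] = (count + 1, time.time())
```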
8
Kate can bypass authentication because it does not fail secure (i.e. it defaults to allowing unauthenticated access)
9
Claudia can undertake more critical functions because authentication requirements are too weak (e.g. do not use strong authentication such as two factor), or there is no requirement to re-authenticate for these
10
Pravin can bypass authentication controls because a centralized standard, tested, proven and approved authentication module/framework/service, separate to the resource being requested, is not being used
J
Mark can access resources or services because there is no authentication requirement, or it was mistakenly assumed authentication would be undertaken by some other system or performed in some previous action
Q
Johan can bypass authentication because it is not enforced with equal rigor for all types of authentication functionality (e.g. register, password change, password recovery, log out, administration) or across all versions/channels (e.g. mobile website, mobile app, full website, API, call centre)
K
Olga can influence or alter authentication code/routines so they can be bypassed
A
You have invented a new attack against Authentication
Read more about this topic in OWASP's free Authentication Cheat Sheet
2
William has control over the generation of session identifiers
3
Ryan can use a single account in parallel since concurrent sessions are allowed
4
Alison can set session identification cookies on another web application because the domain and path are not restricted sufficiently
5
John can predict or guess session identifiers because they are not changed when the user's role alters (e.g. pre and post authentication) and when switching between non-encrypted and encrypted communications, or are not sufficiently long and random, or are not changed periodically
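A minimal sketch of the two properties the card above asks for: identifiers that are long and random, and that are regenerated when the privilege level changes (the session store is illustrative):

```python
import secrets

SESSIONS = {}                             # illustrative in-memory session store

def new_session(data):
    sid = secrets.token_urlsafe(32)       # ~256 bits of randomness
    SESSIONS[sid] = data
    return sid

def on_login(old_sid, user):
    SESSIONS.pop(old_sid, None)           # drop the pre-authentication session
    return new_session({"user": user, "authenticated": True})
```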
6
Gary can take over a user's session because there is a long or no inactivity timeout, or a long or no overall session time limit, or the same session can be used from more than one device/location
7
Graham can utilize Adam's session after he has finished, because there is no log out function, or he cannot easily log out, or log out does not properly terminate the session
8
Matt can abuse long sessions because the application does not require periodic re-authentication to check if privileges have changed
9
Ivan can steal session identifiers because they are sent over insecure channels, or are logged, or are revealed in error messages, or are included in URLs, or are accessible unnecessarily by code which the attacker can influence or alter
10
Marce can forge requests because per-session, or per-request for more critical actions, strong random tokens (i.e. anti-CSRF tokens) or similar are not being used for actions that change state
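A minimal sketch of the per-session anti-CSRF token the card above mentions; the session dictionary and request handling are illustrative, not a specific framework's API:

```python
import hmac
import secrets

def issue_csrf_token(session):
    session["csrf"] = secrets.token_hex(32)
    return session["csrf"]                 # embed in the form or a request header

def check_csrf(session, submitted_token):
    expected = session.get("csrf", "")
    # Constant-time comparison of the stored and submitted tokens.
    if not hmac.compare_digest(expected, submitted_token or ""):
        raise PermissionError("CSRF token missing or invalid")
```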
J
Jeff can resend an identical repeat interaction (e.g. HTTP request, signal, button press) and it is accepted, not rejected
Q
Salim can bypass session management because it is not applied comprehensively and consistently across the application
K
Peter can bypass the session management controls because they have been self-built and/or are weak, instead of using a standard framework or approved tested module
A
You have invented a new attack against Session Management
Read more about this topic in OWASP's free Cheat Sheets on Session Management, and Cross Site Request Forgery (CSRF) Prevention
2
Kyun can access data because it has been obfuscated rather than using an approved cryptographic function
3
Axel can modify transient or permanent data (stored or in transit), or source code, or updates/patches, or configuration data, because it is not subject to integrity checking
4
Paulo can access data in transit that is not encrypted, even though the channel is encrypted
5
Kyle can bypass cryptographic controls because they do not fail securely (i.e. they default to unprotected)
6
Romain can read and modify unencrypted data in memory or in transit (e.g. cryptographic secrets, credentials, session identifiers, personal and commercially-sensitive data), in use or in communications within the application, or between the application and users, or between the application and external systems
7
Gunter can intercept or modify encrypted data in transit because the protocol is poorly deployed, or weakly configured, or certificates are invalid, or certificates are not trusted, or the connection can be degraded to a weaker or un-encrypted communication
8
Eoin can access stored business data (e.g. passwords, session identifiers, PII, cardholder data) because it is not securely encrypted or securely hashed
9
Andy can bypass random number generation, random GUID generation, hashing and encryption functions because they have been self-built and/or are weak
10
Susanna can break the cryptography in use because it is not strong enough for the degree of protection required, or it is not strong enough for the amount of effort the attacker is willing to make
J
Justin can read credentials for accessing internal or external resources, services and other systems because they are stored in an unencrypted format, or saved in the source code
Q
Artim can access or predict the master cryptographic secrets
K
Dan can influence or alter cryptography code/routines (encryption, hashing, digital signatures, random number and GUID generation) and can therefore bypass them
A
You have invented a new attack against Cryptography
Read more about this topic in OWASP's free Cheat Sheets on Cryptographic Storage, and Transport Layer Protection
2
Lee can bypass application controls because dangerous/risky programming language functions have been used instead of safer alternatives, or there are type conversion errors, or because the application is unreliable when an external resource is unavailable, or there are race conditions, or there are resource initialization or allocation issues, or overflows can occur
3
Andrew can access source code, or decompile, or otherwise access business logic to understand how the application works and any secrets contained
4
Keith can perform an action and it is not possible to attribute it to him
5
Larry can influence the trust other parties including users have in the application, or abuse that trust elsewhere (e.g. in another application)
6
Aaron can bypass controls because error/exception handling is missing, or is implemented inconsistently or partially, or does not deny access by default (i.e. errors should terminate access/execution), or relies on handling by some other service or system
7
Mwengu's actions cannot be investigated because there is not an adequate accurately time-stamped record of security events, or there is not a full audit trail, or these can be altered or deleted by Mwengu, or there is no centralized logging service
8
David can bypass the application to gain access to data because the network and host infrastructure, and supporting services/applications, have not been securely configured, the configuration rechecked periodically and security patches applied, or the data is stored locally, or the data is not physically protected
9
Michael can bypass the application to gain access to data because administrative tools or administrative interfaces are not secured adequately
10
Spyros can circumvent the application's controls because code frameworks, libraries and components contain malicious code or vulnerabilities (e.g. in-house, commercial off the shelf, outsourced, open source, externally-located)
J
Roman can exploit the application because it was compiled using out-of-date tools, or its configuration is not secure by default, or security information was not documented and passed on to operational teams
Q
Jim can undertake malicious, non-normal, actions without real-time detection and response by the application
K
Grant can utilize the application to deny service to some or all of its users
A
You have invented a new attack of any type
Read more about application security in OWASP's free Guides on Requirements, Development, Code Review and Testing, the Cheat Sheet series, and the Open Software Assurance Maturity Model
A
Alice can utilize the application to attack users' systems and data
Have you thought about becoming an individual OWASP member? All tools, guidance and local meetings are free for everyone, but individual membership helps support OWASP's work
B
Bob can influence, alter or affect the application so that it no longer complies with legal, regulatory, contractual or other organizational mandates
2
Andrew can expose sensitive data through the app's auto-generated screenshots when the app moves to the background
3
Harold can spy on sensitive data being entered through the user interface because the data is excessive, not properly masked, or not cleaned up after use
4
Kelly can expose sensitive data by taking advantage of the app's excessive permissions connected to the app's use of location, camera, microphone, storage, etc
5
Jason can provoke a memory leak or memory corruption because the app has cyclic dependencies, manages pointers inadequately, keeps an incorrect reference count, or does not release shared resources or apply stack protection
6
Dawn can expose and intercept sensitive functionality through interprocess communication because permissions for broadcast and sharing are not set, not narrow enough or because sensitive functionality isn't appropriately excluded when sharing
7
Lauren can traverse or modify otherwise protected files through access to the underlying file system by exploiting weaknesses in file system-based content providers, resolvers or their configuration
8
Colin can expose sensitive data through the app's interprocess communication because the content provider's query methods are not properly parameterized and arguments sanitized
9
Toby can modify or expose data by injection because the response from implicit intents is not properly validated
10
Max can modify or expose data because input validation and sanitation are not properly applied to interprocess communication or because extensions are not properly restricted
J
Johan can modify or expose sensitive data by exploiting weaknesses in the SDK or third party libraries because updates to the app and platform are not enforced or do not patch known software vulnerabilities
Q
Xavier can inject scripts into the web view because it allows embedding content using deep linking without proper authorization and validation of the host, scheme and path of the target, as these can be changed by the user, or because safe browsing is disabled
K
Grant can modify or expose data by influencing or altering JavaScript bridges, extensions or interprocess communication (e.g. shared memory, message passing, pipes, sockets)
A
You have invented a new attack against “Platform and Code”
Read more about this topic in OWASP's free Cheat Sheets on Mobile Application Security, and “Mobile App Code Quality” in the “Mobile Application Security Testing Guide” on the OWASP MAS website
2
Matt can inspect sensitive application log data because logging statements have not been removed or reviewed as safe before the production release
3
Bil can access sensitive data for sensitive fields from the pasteboard/clipboard or keyboard cache because the pasteboard/clipboard is not timely cleared, disabled or restricted for sensitive fields, or the keyboard cache is not disabled
4
Ricardo can extract data stored by the app on a stolen or decommissioned device because it does not enforce device access security policies (e.g. PIN protected locking, app-/os-version, USB debug deactivation, device encryption and rooting)
5
Kevin can read sensitive data mapped to user accounts or sessions by extracting data sent through third-party libraries and/or notifications sent between the app and embedded services (e.g. logs, notifications, backups, cache, local db)
6
Sam can dump sensitive data from memory because the data is not stored as primitive data types and overwritten with random data after use or because the app's input fields use insecure SDKs to store the data in RAM
7
Steve can access sensitive data by reading backups and/or local, internal/external storage
8
Martin can modify or expose sensitive data through unsafe reflection when reading data from public data storage (e.g. shared preferences) because the data is not validated before being read by the app
9
Adrian can compromise the app communication through a proxy because the app does not make use of certificate pinning or implements it incorrectly
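A minimal sketch of certificate pinning as the card above describes it, comparing the server certificate's SHA-256 fingerprint against a value shipped with the app (the host and pin are placeholders; real pinning also needs a rotation plan):

```python
import hashlib
import socket
import ssl

PINNED_SHA256 = "replace-with-the-expected-certificate-fingerprint"

def connect_pinned(host, port=443):
    ctx = ssl.create_default_context()     # normal chain and hostname checks still apply
    sock = ctx.wrap_socket(socket.create_connection((host, port)),
                           server_hostname=host)
    der = sock.getpeercert(binary_form=True)
    if hashlib.sha256(der).hexdigest() != PINNED_SHA256:
        sock.close()
        raise ssl.SSLError("certificate pin mismatch")
    return sock
```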
10
Maarten can compromise the communication between the app and the external services because the app does not verify TLS certificates and chains, trusts insecure sources, lacks hostname verification or ignores TLS verification issues
J
Nihel can compromise the communication as it may fall back to an insecure or unencrypted channel, because encryption is optional, or because of client-server protocol or security provider weaknesses
Q
Ahmed can read and modify data in transit because the communication is transmitted over an unencrypted channel
K
Taher can intercept, extract or modify sensitive data at rest or in transit by influencing or altering methods for transferring or storing data at rest or in transit
A
You have invented a new attack against “Network & Storage”
Read more about this topic in OWASP's free Cheat Sheets on Mobile Application Security, and “Mobile App Network Communication” in the “Mobile Application Security Testing Guide” on the OWASP MAS website
2
Sebastien can disclose sensitive data because the application is set up to log debug information at runtime
3
Tobias can disclose sensitive data by dumping debug symbols while the application is running
4
Timur can change the code of the production release because the code of the application has not been properly signed using a valid production certificate
5
Matteo can bypass access controls and trigger functionality because debugging is left enabled in the production build
6
Joren can bypass access controls because the anti-debugging controls aren't strong enough according to what is recommended or the perceived effort of a potential attacker
7
Erlend can compromise the app by running it in an emulator because the protections against emulators are not strong enough according to what is recommended or the perceived effort of a potential attacker
8
Carlos can reverse engineer the app because the anti-reverse engineering controls aren't strong enough according to what is recommended or the perceived effort of a potential attacker
9
Sean can reverse engineer the app because the code obfuscation isn't strong enough according to what is recommended or the perceived effort of a potential attacker
10
Juan can bypass jailbreak and root detection and execute administrative functions to bypass integrity checks and access controls and trigger app functionality
J
Pekka can compromise the integrity of the storage because the file integrity checks aren't strong enough according to what is recommended or the perceived effort of a potential attacker
Q
Titus can patch out critical functionality because the runtime integrity checks are not strong enough according to what is recommended or the perceived effort of a potential attacker
K
Sherif can influence or alter controls against reverse engineering and runtime protection and can therefore bypass them
A
You have invented a new attack against “Resilience”
Read more about this topic in OWASP's free Cheat Sheets on Mobile Application Security, and “Mobile App Tampering and Reverse Engineering” in the “Mobile Application Security Testing Guide” on the OWASP MAS website
2
Lesego can compromise cryptographic operations and resources because keys are reused for multiple purposes, or not used according to the purpose for which they were created
3
Emery can access data because it has been obfuscated rather than using an approved cryptographic function
4
Enselme can modify sensitive data (stored or in transit) because it is not subject to integrity checking
5
Orace can predict the seed value used for generating cryptographic keys thereby compromising the cryptographic key
6
Kouti can extract sensitive data because the cryptographic key used is hard-coded or stored insecurely, such as in local, internal/external storage
7
Ramsey can access stored sensitive data because it is not securely encrypted
8
Adel can predict and use the app's cryptographic keys because they are insufficiently long and random, can be enumerated, or derived from known values
9
Fady can bypass cryptographic controls because they do not fail securely (i.e. they default to unprotected)
10
Ash can break the cryptography because it is not strong enough according to what is recommended or the perceived effort of a potential attacker
J
Hassan can extract or modify sensitive data because functions for storage and/or encryption are weak, deprecated or used incorrectly
Q
Simon can bypass hashing and encryption functions because they are custom and/or inadequately implemented
K
Tarik can influence or alter cryptographic operations and can therefore bypass them
A
You have invented a new attack against “Cryptography”
Read more about this topic in OWASP's free Cheat Sheets on Mobile Application Security, and “Mobile App Cryptography” in the “Mobile Application Security Testing Guide” on the OWASP MAS website
2
Garth can reduce app users' privacy because the app is not transparent about the app's data collection and usage in a concise, easily accessible and understandable way
3
Elsa can reduce app users' privacy because the app does not allow for the user to easily manage, delete and modify their data, change privacy settings and re-prompt for consent when more data is required
4
Elizabeth can reduce app users' privacy because the app sends too much personal data without the user's consent to downstream services that are outside the user's control
5
Debarghaya can reduce app users' privacy because the app repurposes personal information (e.g. device IDs, IP addresses, behavioral patterns) collected for security concerns in order to cater for commercial interests without consent
6
Kim can reduce app users' privacy because the app repurposes biometric information (e.g. fingerprints, facial recognition data, etc.) collected for security concerns in order to cater for commercial interests
7
Gastón can execute malicious actions through intent redirection because the intent is not properly sanitized and made immutable
8
Roxana can do arbitrary file overwrites and potentially execute malicious code through path traversal because the target path and directory are not appropriately validated
9
Alessandro can exploit the app by taking advantage of buffer overflows and memory leaks to write foreign code within the mobile code's address space
10
Carlos can use the application's notification services to launch phishing campaigns because notifications are not sanitized and validated according to best practices
J
Luis can influence or alter cryptographic methods to corrupt other users' data because the integrity of the encrypted data is not verified before being shared with external services
Q
Victor can patch the app and use it to distribute malicious code because the runtime integrity checks are not strong enough according to what is recommended or the perceived effort of a potential attacker
K
Ruben can use the app, without modifications, to spread malicious code because methods for transfer and storage do not perform proper data sanitization and validation
A
You have invented a new attack of any type
Read more about this topic in OWASP's free Cheat Sheets on Mobile Application Security, and “Mobile App User Privacy Protection” in the “Mobile Application Security Testing Guide” on the OWASP MAS website
A
Starr can influence, alter or affect the app so that it no longer complies with legal, regulatory, contractual or other mandates
Have you thought about becoming an individual OWASP member? All tools, guidance and local meetings are free for everyone, but individual membership helps support OWASP's work
B
Mallory can use the app installed on Bob's device maliciously to surveil, spy on, eavesdrop, control remotely, track or otherwise monitor Bob, without consent and/or notification
Model Risk
Catastrophic forgetting
[ eval:5:catastrophic forgetting ]
When a model is filled with too much overlapping information, collisions in the representation space may lead to the model “forgetting” information.
Model Risk
Oscillation
[ alg:8:oscillation ]
An ML system may end up oscillating and not properly converging if using gradient descent in a space with a misleading gradient.
Model Risk
Randomness
[ alg:4:randomness ]
Setting weights and thresholds with a bad RNG can damage system behavior and lead to subtle security issues.
Model Risk
Online system manipulation
[ alg:1:online ]
When an ML system keeps learning online during operations, clever attackers can nudge the model so that it drifts from its intended operational profile.
Model Risk
Overfitting
[ eval:1:overfitting ]
The model learns its training dataset so well that it's no longer able to generalize outside of the training set and will perform poorly.
Model Risk
Hyperparameters
[ inference:3:hyperparameters ]
An attacker that can control the hyperparameters can manipulate the future training of the machine learning model
Model Risk
Hosting
[ inference:4:hosting ]
The server where the model is hosted is insufficiently protected against unauthorized parties.
Model Risk
Hyperparameter sensitivity
[ alg:10:hyperparameter sensitivity ]
Sensitive hyperparameters that have been set experimentally may not be sufficient for the intended problem space, and can lead to overfitting.
Model Risk
Model theft
[ model:5:steal the box ]
Stealing ML system knowledge is possible through direct input/output observation, enabling attackers to reverse engineer the model.
Model Risk
Training set reveal
[ model:4:training set reveal ]
Most ML algorithms learn a great deal about their data and store a representation internally. This data may be sensitive, and can potentially be extracted from the model.
Model Risk
Trojanized model
[ model:2:Trojan ]
Model transfer leads to the possibility that what is being reused may be a Trojaned (or otherwise damaged) version of the model.
Model Risk
Improper re-use of model
[ model:1:improper re-use ]
ML models are re-used in transfer situations, where a pre-trained model is specialized toward a new use case. The model may be transferred into a problem space it's not designed for.
Model Risk
You have invented your own risk associated with machine learning models.
Input Risk
LLM feedback scores
[ LLM:inference:6:feedback scores ]
Some LLM chat systems allow user feedback as a parameter for tuning their system. This can be abused by attackers that give feedback in a coordinated fashion to nudge the ML system.
Input Risk
Open to the public
[ LLM:input:3:open to the public ]
An LLM model is often open to the public, which makes it susceptible to attacks from users.
Input Risk
Sponge input
[ LLM:input:5:sponge input ]
A sponge attack provides an LLM system with input that is more costly to process than “normal”. Like a DoS attack, it seeks to exhaust the processing budget.
Input Risk
Input ambiguity
[ LLM:input:6:input ambiguity ]
English, the main interface language for LLMs, is an ambiguous interface. Natural language can be misleading, making LLMs susceptible to misinformation.
Input Risk
Text encoding
[ raw:7:text encoding ]
An ML system engineered with one text encoding scheme in mind might yield surprising results if presented with a differently encoded text.
Input Risk
Denial of service
[ system:10:denial of service ]
Denial of Service attacks can have a massive impact on a critical ML system. When an ML system breaks down, recovery may not be possible.
Input Risk
User risk
[ inference:5:user risk ]
A user may expose their personal data and their interests to the owners of an ML system when they interact with the system.
Input Risk
Dirty input
[ input:3:dirty input ]
Dirty inputs can be hard to process, and may be leveraged by an attacker adding noise in their prompts or in data sources for future training.
Input Risk
Controlled input stream
[ input:2:controlled input stream ]
Outside sources of input may be manipulated by an attacker.
Input Risk
Looped input
[ input:4:looped input ]
ML system output to the real world may feed back into training data or input, leading to a feedback loop, termed recursive pollution.
Input Risk
Prompt injection
[ LLM:input:2:prompt injection ]
Input manipulation for LLMs: an attacker manipulates a large language model (LLM) through malicious inputs to override initial instructions given in system prompts.
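As a minimal sketch of why the card above works: when untrusted text is concatenated into the same instruction stream as the system prompt, the model cannot tell instructions from data. call_llm below is a hypothetical client function, not a real library API; a safer pattern keeps user input in a separate, data-only role and still checks the output afterwards.

```python
SYSTEM = "You are a support bot. Never reveal internal notes."

def vulnerable(user_text):
    # Untrusted text joins the instruction stream, so a prompt such as
    # "ignore previous instructions..." can override SYSTEM.
    return call_llm(prompt=SYSTEM + "\n" + user_text)      # call_llm is hypothetical

def safer(user_text):
    # Untrusted input is passed as data in its own role; the system
    # instructions stay pinned and the response can still be filtered.
    return call_llm(messages=[{"role": "system", "content": SYSTEM},
                              {"role": "user", "content": user_text}])
```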
Input Risk
Malicious input
[ input:1:adversarial examples ]
An attacker can fool a machine learning system by providing malicious input that causes the ML system to make a false prediction or categorization.
Input Risk
You have invented your own risk associated with machine learning input.
Output Risk
Cry wolf
[ system:6:cry wolf ]
If an ML model is integrated into a security decision and raises too many alarms, its output may be ignored.
Output Risk
Black box discrimination
[ system:1:black box discrimination ]
ML systems that operate with high impact decisions based on personal data carry the risk of illegal discrimination based on bias.
Output Risk
LLM overreliance
[ OWASP LLM09 ]
Dependence on an LLM without oversight may lead to misinformation and legal concerns. It will also be hard to detect an attack against the LLM system.
Output Risk
Inscrutability
[ output:4:inscrutability ]
In far too many cases with ML, nobody is really sure how the trained systems do what they do. This negatively affects trustworthiness.
Output Risk
Miscategorization
[ output:3:miscategorization ]
Bad output due to internal bias, malicious input or other attacks may escape into the world.
Output Risk
Transparency
[ output:5:transparency ]
It is easier to perform attacks undetected on a black-box system which is not transparent about how it works.
Output Risk
Confidence scores
[ inference:3:confidence scores ]
An ML model's confidence scores can help an attacker tweak inputs to make the system misbehave.
Output Risk
Wrongness
[ LLM:output:2:wrongness ]
LLMs are stochastic in their nature, and can generate highly convincing misinformation in their attempt to satisfy the prediction of the next tokens from a prompt.
Output Risk
Excessive LLM agency
[ OWASP LLM08 ]
An LLM-based system may undertake actions leading to unintended consequences if granted excessive functionality, permissions, or autonomy.
Output Risk
Overconfidence
[ system:2:overconfidence ]
An ML model integrated into a system with its output treated as high confidence data may cause a range of unexpected issues.
Output Risk
Error propagation
[ system:5:error propagation ]
When ML output is input to a larger decision process, errors in the ML subsystem may propagate in unforeseen ways.
Output Risk
Output manipulation
[ output:1:direct ]
An attacker directly manipulates the output stream by getting between the ML system and its receiver. This may be hard to detect because models are sometimes opaque.
Output Risk
You have invented your own risk associated with machine learning output.
Dataset Risk
Metadata
[ raw:10:metadata ]
Metadata may accidentally degrade generalization since a model learns a feature of the metadata instead of the content itself.
Dataset Risk
Data rights
[ LLM:raw:4:data rights ]
Copyrighted, privacy protected or otherwise legally encumbered data are scraped from the internet to train ML models. This can lead to expensive legal entanglements.
Dataset Risk
Partitioning
[ assembly:4:partitioning ]
Bad data partitions for training, validation and testing datasets may lead to a misbehaving ML system.
Dataset Risk
Normalization
[ assembly:3:normalize ]
Normalization changes the nature of raw data, and may destroy the feature of interest by introducing too much bias.
Dataset Risk
Annotation
[ assembly:2:annotation ]
The way data is annotated into features can be directly attacked, introducing attacker bias into a system.
Dataset Risk
Encoding integrity
[ assembly:1:encoding integrity ]
Pre-processing and encoding of the data can lead to encoding integrity issues if the data has bias or discrimination in its nature.
Dataset Risk
Bad evaluation data
[ eval:2:bad eval data ]
A bad evaluation dataset can give unrealistic projections of how the model will perform when it is shipped to production.
Dataset Risk
Storage
[ data:4:storage ]
Data may be stored and managed insecurely. Who has access to the data, and why?
Dataset Risk
Recursive pollution
[ LLM:raw:1:recursive pollution ]
An ML model (LLM or other) generates incorrect content, and that content finds its way into future training data, which can damage the accuracy and reliability of the model.
Dataset Risk
Data integrity
[ system:2:data integrity ]
If distributed datasets do not have proper integrity checks in place, data can be tampered with undetected as it passes between components.
Dataset Risk
Data confidentiality
[ raw:1:data confidentiality ]
Sensitive and confidential data that is used for ML training can be disclosed with extraction attacks.
Dataset Risk
Data poisoning
[ data:1:poisoning ]
An attacker intentionally manipulates data to disrupt, introduce bias, control or otherwise influence ML training. On the internet, lots of data are already poisoned “by default”.
Dataset Risk
You have invented your own risk associated with machine learning datasets.