Community Note

- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
- Please do not leave "+1" or "me too" comments; they generate extra noise for issue followers and do not help prioritize the request.
- If you are interested in working on this issue or have submitted a pull request, please leave a comment.
Is your feature request related to a problem? Please describe
Generative AI models and their associated data can contain sensitive information, personally identifiable information (PII), or other regulated data. Without proper controls in place, this sensitive data may be inadvertently disclosed or leaked, leading to compliance violations and legal exposure.
Describe the solution you'd like
Develop an Amazon Comprehend solution to automatically detect and redact PII and other sensitive information from the data ingestion pipeline and model outputs in the Bedrock environment. This solution should leverage Amazon Comprehend's natural language processing capabilities to identify and mask sensitive data before it is processed or generated by the AI models.
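As a rough illustration of the requested behavior, the sketch below uses Comprehend's real `DetectPiiEntities` API to find PII spans and mask them before the text reaches a Bedrock model. The function names (`redact`, `redact_with_comprehend`) and the `[TYPE]` placeholder format are illustrative choices, not an existing implementation:

```python
def redact(text, entities):
    """Replace each detected entity span in `text` with a [TYPE] placeholder.

    `entities` follows the shape returned by Comprehend's DetectPiiEntities:
    dicts with Type, BeginOffset, and EndOffset keys. Spans are applied from
    the end of the string backwards so earlier offsets stay valid as the
    text changes length.
    """
    for ent in sorted(entities, key=lambda e: e["BeginOffset"], reverse=True):
        text = text[: ent["BeginOffset"]] + f"[{ent['Type']}]" + text[ent["EndOffset"]:]
    return text


def redact_with_comprehend(text, region="us-east-1"):
    """Detect PII with Amazon Comprehend, then mask it before the text is
    ingested by or returned from a Bedrock model (hypothetical wiring)."""
    import boto3  # imported here so the pure redact() helper has no AWS dependency

    comprehend = boto3.client("comprehend", region_name=region)
    resp = comprehend.detect_pii_entities(Text=text, LanguageCode="en")
    return redact(text, resp["Entities"])
```

The same `redact` pass could be applied symmetrically to model outputs before they are returned to callers, covering both ends of the pipeline described above.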
Describe alternatives you've considered
An alternative approach is to manually review and sanitize data before ingesting it into the Bedrock environment. However, this is a time-consuming and error-prone process, especially with large volumes of data.
Additional context
The solution should incorporate appropriate security controls, such as encryption at rest and in transit, secure networking, and access controls. It should also include mechanisms to detect and redact PII and other sensitive information from the Bedrock data pipeline.