Azure AI Content Safety is a content moderation platform that uses AI models to detect offensive or inappropriate content in text and images, helping organizations keep their content safe and create safer online experiences.
N/A
ServiceNow Governance, Risk, and Compliance
Score 9.0 out of 10
N/A
ServiceNow Governance, Risk, and Compliance provides the tools businesses use to proactively manage risk by measuring, testing, and auditing internal processes. The solution helps business users ensure compliance with regulations, policies, standards, and frameworks. It is available in Standard, Professional, and Enterprise editions, the latter two supporting GRC and internal auditing processes.
N/A
Pricing
Azure AI Content Safety
ServiceNow Governance, Risk, and Compliance
Editions & Modules
No answers on this topic
No answers on this topic
Offerings
Pricing Offerings
Azure AI Content Safety
ServiceNow Governance, Risk, and Compliance
Free Trial
No
No
Free/Freemium Version
No
No
Premium Consulting/Integration Services
No
No
Entry-level Setup Fee
No setup fee
No setup fee
Additional Details
—
—
More Pricing Information
Community Pulse
Azure AI Content Safety
ServiceNow Governance, Risk, and Compliance
Features
Azure AI Content Safety
ServiceNow Governance, Risk, and Compliance
Governance, Risk & Compliance
Comparison of Governance, Risk & Compliance features of Product A and Product B
Governance, Risk & Compliance (overall): Azure AI Content Safety -, ServiceNow GRC 8.5 (10 Ratings), 12% above category average
Common repository of GRC items: Azure AI Content Safety 0 Ratings, ServiceNow GRC 8.6 (10 Ratings)
Risk management: Azure AI Content Safety 0 Ratings, ServiceNow GRC 9.0 (10 Ratings)
Integration with Corporate Performance Management (CPM) systems
Azure AI Content Safety is used in our comment-analysis red-flag process to separate out comments containing vulgar or unparliamentary words and other unsafe content, including sexuality, harassment, and terrorism. Flagged comments are stored in our PostgreSQL database (administered through pgAdmin) and sent as reports to clients for product safety, rather than being surfaced on the frontend.
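A minimal sketch of the red-flag pipeline described in this review, assuming the azure-ai-contentsafety Python SDK; the endpoint, key, and severity threshold are illustrative placeholders, not details from the review:

```python
# Sketch of a red-flag comment pipeline using Azure AI Content Safety.
# The endpoint, key, and threshold are assumed placeholder values.

def is_red_flag(severities, threshold=2):
    """Decision rule: flag a comment when any harm category's
    severity meets or exceeds the threshold."""
    return any(sev >= threshold for sev in severities.values())

def analyze_comment(comment, endpoint, key, threshold=2):
    """Score one comment with Azure AI Content Safety and apply the
    decision rule. SDK imports are local so the sketch is readable
    without the package installed."""
    from azure.ai.contentsafety import ContentSafetyClient
    from azure.ai.contentsafety.models import AnalyzeTextOptions
    from azure.core.credentials import AzureKeyCredential

    client = ContentSafetyClient(endpoint, AzureKeyCredential(key))
    result = client.analyze_text(AnalyzeTextOptions(text=comment))
    # result.categories_analysis holds one severity per harm category
    # (Hate, SelfHarm, Sexual, Violence).
    severities = {item.category: item.severity
                  for item in result.categories_analysis}
    return is_red_flag(severities, threshold), severities

# A flagged comment would then be written to the PostgreSQL reporting
# table instead of being shown on the frontend.
print(is_red_flag({"Hate": 4, "Sexual": 0}))    # True: severity 4 >= 2
print(is_red_flag({"Hate": 0, "Violence": 1}))  # False: all below 2
```

Keeping the threshold check in a separate pure function makes the flagging policy easy to tune and test independently of the API call.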
Oracle EBS R12 requires a unique user skillset to understand how it handles user access and functions. Accordingly, ServiceNow has this high level of sophistication to manage this information and apply it to Sensitive Access and Segregation of Duties rules to identify exceptions. This depth of configuration is critical to accurately identify when Oracle Responsibilities (access) truly allows access and thus could be a violation. ERPs with less complexity may not require this customization of ServiceNow GRC, but you would be wise to raise these questions and examples in the demo to ensure it will work for you. In the past, we have found that risks of under-reporting exceptions or false positives become so voluminous that users don't always get to the accurate violations for timely remediation. Proper configuration up front will improve your effectiveness and ROI down the road.
Findings reported by the auditor: GRC helps us identify, assign, and track their resolution.
Exceptions to the information security policy: these require quarterly reviews, and we set up reminders to revisit them.
Building out new projects with security and compliance baked in from the start, tracked in GRC, so we deliver a compliant product on day one.
Delivering more out-of-the-box functionality that rivals other GRC platforms. The bare-bones approach may not help companies that lack the expertise or capability to build effective GRC processes.
An easier way to implement workflows.
Offering better metrics without buying add-on tools.
I have given Azure AI Content Safety a 10 rating because it meets our requirements and was exactly what we needed at the time to filter comments containing vulgar content. No other model worked as well as the Azure AI Content Safety model. We have 20 lakh (2 million) comments; other models could not handle even half of them, but Azure AI Content Safety covered 75%.
I'm satisfied with our experience. The configuration was the biggest challenge, but we have moved on to the stage of user training and usability. We would appreciate better user training documentation, and possibly videos and/or computer-based training, to help our international users adopt this software for their GRC needs.
It's a good system, but I am awaiting key features in the new release. We hear that ServiceNow is continually adding new features and we look for improved reporting, better Oracle Integration, and user training opportunities. To the extent these materialize, we expect further improvements in our experience with ServiceNow GRC. Until that time, though, we believe we are meeting our objectives expected at the beginning of this project.
Azure OpenAI Service based models such as the Azure OpenAI 4o and 4o mini models, and even the omni models, did not work as well as the Azure AI Content Safety model for our comment analysis and filtering of unparliamentary words. OpenAI's ChatGPT models performed similarly: accuracy was low and errors were created. That is how Azure AI Content Safety stacks up against these alternatives, and why we chose it over the others.
We just recently started using TrustArc for data privacy requests, and I can already say that TrustArc is the more confusing platform once you're in it. A positive of ServiceNow is that a majority of our URLs point to websites we own, which our employees are very comfortable using, rather than pushing them to another website that feels unsafe.
One test run cost 7,000 US dollars, which is very costly; we connected with the Azure team, sorted it out, and got 4,000 US dollars back. This should not happen.
Azure AI Content Safety helped us find red-flagged comments across lakhs of comments in our analysis.
Azure AI Content Safety keeps those comments off our product frontend and stores them as a report, which helps us a lot.