How we secure AI deployments — the full picture.
This page is written for security teams, procurement officers, and anyone who needs to understand exactly what controls are in place and which standards they map to.
Data Sovereignty & Residency
What we do
All data — at rest, in transit, and during processing — remains within Australian-hosted infrastructure. This applies to training data, user inputs, model outputs, logs, and any intermediate processing artefacts.
We architect solutions on Australian cloud regions (AWS Sydney, Azure Australia East/Southeast, GCP Sydney) and explicitly prohibit data routing through offshore nodes.
Where AI models require API calls, we ensure those endpoints resolve within Australian boundaries or deploy models locally within your environment.
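As a concrete sketch of that endpoint restriction, the check below validates that a configured API endpoint sits under an approved Australian-region domain before any call is made. The suffix list is illustrative only (the AWS entry is a real Sydney-region suffix; the Azure and GCP entries are placeholders), and a real deployment would source it from approved infrastructure configuration rather than hard-coding it.

```python
from urllib.parse import urlparse

# Illustrative allowlist of Australian-region endpoint suffixes.
# In practice this comes from vetted infrastructure configuration.
AU_REGION_SUFFIXES = (
    ".ap-southeast-2.amazonaws.com",   # AWS Sydney
    ".australiaeast.example-azure.net",    # placeholder for an Azure AU East endpoint
    ".australia-southeast1.example-gcp.net",  # placeholder for a GCP Sydney endpoint
)

def endpoint_is_australian(url: str) -> bool:
    """Admit an endpoint only if its host falls under an approved
    Australian-region domain suffix."""
    host = urlparse(url).hostname or ""
    return host.endswith(AU_REGION_SUFFIXES)
```

A pipeline would run this gate before constructing any client, so an offshore endpoint fails fast instead of silently routing data overseas.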
Why it matters
Government agencies operating under the PSPF must ensure OFFICIAL and PROTECTED information is stored and processed in Australia by appropriately cleared personnel. Private sector organisations subject to the Australian Privacy Act face obligations around cross-border disclosure of personal information under APP 8.
Essential Eight Alignment
What we do
Every deployment environment is hardened in line with the ASD Essential Eight mitigation strategies at the maturity level your organisation is targeting (typically Maturity Level 2 or 3 for government).
Application control — Only approved applications and scripts can execute. We allowlist inference engines, orchestration tools, and supporting services.
Patch applications and operating systems — Automated patching pipelines for the AI stack including OS, container runtime, ML frameworks, and dependent libraries.
Configure Microsoft Office macro settings — Where Office integration exists, macros are disabled or restricted to signed, vetted code only.
User application hardening — Browsers, PDF readers, and other user-facing components are configured to block web advertisements, Java, Flash, and other unnecessary web content.
Restrict administrative privileges — Dedicated admin accounts, just-in-time elevation, no shared credentials, no standing privileges.
Multi-factor authentication — All access to AI management consoles, model registries, data stores, and deployment pipelines requires MFA. No exceptions.
Regular backups — Model artefacts, configuration, and system state are backed up with tested recovery procedures and immutable retention.
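To make the application control strategy concrete, here is a minimal hash-based allowlist check. The `APPROVED_SHA256` set is a hypothetical stand-in for a signed policy store, and in production this enforcement is done by the platform (for example Windows Defender Application Control or AppLocker), not by a script like this:

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist of SHA-256 digests for vetted binaries.
# A real deployment sources these from a signed policy store.
APPROVED_SHA256: set[str] = set()

def may_execute(binary: Path) -> bool:
    """Permit execution only when the file's SHA-256 digest
    appears on the approved allowlist."""
    digest = hashlib.sha256(binary.read_bytes()).hexdigest()
    return digest in APPROVED_SHA256
```

The same default-deny logic applies whether the artefact is a native binary, a container image, or an inference script: unknown code never runs.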
Why it matters
The Essential Eight is the ASD’s foundational set of mitigation strategies for reducing the risk of cyber incidents. Government agencies are expected to achieve target maturity levels, and a poorly hardened AI deployment can undermine an otherwise strong security posture.
IRAP-Ready Architecture
What we do
We structure AI deployments so they support — rather than complicate — your IRAP assessment. ISM controls are baked into the architecture from the start.
Network segmentation — AI workloads are isolated in dedicated network zones with strict ingress/egress rules. Model serving, data processing, and management interfaces sit in separate segments.
Cryptography — All data in transit uses TLS 1.2+ (TLS 1.3 preferred). Data at rest uses AES-256 encryption with customer-managed keys where required. Key rotation is automated.
Access control — Role-based access control (RBAC) with the principle of least privilege. Access reviews are scheduled and documented. Privileged access is logged and triggers alerts.
System hardening — Base images are CIS-benchmarked. Unnecessary services are removed. Default credentials are eliminated. Container images are scanned before deployment.
Audit logging — All system events, data access, model invocations, and administrative actions generate immutable audit logs shipped to a centralised SIEM.
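The transport encryption floor described above can be pinned in code. This sketch, using only the Python standard library, builds a client-side TLS context that refuses anything below TLS 1.2 while still negotiating TLS 1.3 where the server supports it:

```python
import ssl

def hardened_client_context() -> ssl.SSLContext:
    """TLS context enforcing a TLS 1.2 minimum.

    create_default_context() already enables certificate validation
    and hostname checking; we leave both on and only raise the
    protocol floor."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx
```

Any client in the pipeline built from this context simply cannot fall back to a legacy protocol version, turning the policy into a hard failure rather than a configuration hope.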
Why it matters
Agencies pursuing or maintaining IRAP assessment at PROTECTED level need their AI systems to be covered by the same control framework as the rest of their ICT environment. A poorly architected AI deployment can create gaps that jeopardise an otherwise solid assessment.
Privacy by Design
What we do
Data minimisation — AI models receive only the data fields required for their specific function. We strip, mask, or tokenise personal identifiers before they enter the processing pipeline wherever possible.
Purpose limitation — Data used for one function cannot be repurposed for another without explicit authorisation. This is enforced through access controls and pipeline design, not just documentation.
Consent management — Where AI processes data requiring consent, we integrate with your existing consent management systems and ensure the AI respects opt-out and withdrawal signals.
Retention controls — Processed data, model inputs, and outputs follow defined retention schedules. We automate deletion at the end of retention periods and log when it happens.
Privacy impact support — We provide the technical inputs your privacy team needs to complete Privacy Impact Assessments (PIAs) for AI-related processing activities.
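A minimal sketch of the data minimisation and tokenisation step: only allowlisted fields reach the model, and identifiers are replaced with keyed HMAC tokens so the same input always maps to the same opaque value. The field policy (`ALLOWED_FIELDS`, `TOKENISED_FIELDS`) and the inline key are illustrative; a real key lives in a KMS and rotates.

```python
import hashlib
import hmac

# Hypothetical field policy for one processing pipeline.
ALLOWED_FIELDS = {"case_summary", "category", "customer_email"}
TOKENISED_FIELDS = {"customer_email"}
TOKEN_KEY = b"illustrative-only"  # a real key comes from a KMS, never source code

def minimise(record: dict) -> dict:
    """Drop every field the model doesn't need, then replace
    identifier fields with deterministic keyed tokens."""
    out = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    for k in TOKENISED_FIELDS & out.keys():
        out[k] = hmac.new(TOKEN_KEY, out[k].encode(), hashlib.sha256).hexdigest()[:16]
    return out
```

Deterministic tokens preserve joinability (the same customer tokenises identically across records) without ever exposing the raw identifier to the model.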
Why it matters
AI systems often process personal or sensitive information at scale. Privacy controls must be embedded directly into architecture rather than relying on policy alone. Failure to do so risks breaches of the Australian Privacy Principles and erosion of public trust.
AI-Specific Safety Controls
What we do
Prompt injection defence — Input validation, prompt sandboxing, and output filtering to prevent adversarial inputs from manipulating model behaviour. System prompts are isolated from user inputs.
Output guardrails — Models are constrained to their intended domain. Output classifiers, content filters, and boundary checks prevent hallucinated, harmful, or out-of-scope responses.
Model access control — Tiered access so sensitive functions (data retrieval, system actions) are restricted to authorised roles.
Red-teaming and adversarial testing — Structured adversarial testing before go-live to identify jailbreaks, data leakage, bias, and unintended behaviours.
Model supply chain security — We verify the provenance of pre-trained models and third-party components. Models are scanned for known vulnerabilities and backdoors.
Rate limiting and abuse prevention — API endpoints are rate-limited and monitored for anomalous usage patterns that could indicate automated abuse or extraction attempts.
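The rate-limiting control is commonly implemented as a token bucket; the sketch below takes the clock as a parameter so the refill behaviour is explicit and testable. The capacity and refill rate are illustrative, not our production values.

```python
from dataclasses import dataclass

@dataclass
class TokenBucket:
    """Minimal token-bucket limiter of the kind used to throttle
    model API endpoints. Values are illustrative."""
    capacity: float = 10.0        # burst allowance
    refill_per_sec: float = 1.0   # sustained request rate
    tokens: float = 10.0
    last: float = 0.0

    def allow(self, now: float) -> bool:
        # Refill in proportion to elapsed time, capped at capacity,
        # then spend one token per admitted request.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

Requests rejected here are also logged, since a sustained burst against a model endpoint is exactly the anomalous pattern the monitoring layer watches for.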
Why it matters
Traditional cyber security frameworks cover infrastructure and applications well, but AI introduces unique attack surfaces. Prompt injection, training data poisoning, model denial of service, and sensitive information disclosure all require dedicated controls that sit on top of the standard security stack.
Continuous Monitoring & Assurance
What we do
Structured logging — Every model invocation, data access event, administrative action, and system change generates a structured log entry shipped to your SIEM in real time.
Real-time monitoring — Alerts for anomalous model behaviour, unexpected data access patterns, authentication failures, and configuration changes. Your security operations team gets visibility from day one.
Drift detection — Infrastructure-as-code and policy-as-code baselines are monitored continuously. If a configuration deviates from the approved baseline, an alert fires and the change is flagged.
Compliance dashboards — Dashboards mapping your AI deployment’s current state against the relevant control framework (ISM, Essential Eight, or your internal standard).
Incident response integration — AI-related security events feed into your existing incident response process. We document AI-specific response procedures (model isolation, rollback to known-good state).
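At its core, the drift detection step reduces to comparing observed configuration against its approved baseline and surfacing only the deviations. This minimal sketch works on flat key-value settings; real baselines are richer (IaC state, policy-as-code), but the shape of the comparison is the same.

```python
def config_drift(baseline: dict, observed: dict) -> dict:
    """Return only the settings that deviate from the approved
    baseline, including settings added or removed entirely."""
    drift = {}
    for key in baseline.keys() | observed.keys():
        expected, actual = baseline.get(key), observed.get(key)
        if expected != actual:
            drift[key] = {"expected": expected, "actual": actual}
    return drift
```

In a deployment, a non-empty result from this comparison is what fires the alert and flags the change for review.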
Why it matters
A system that was secure at deployment can drift out of compliance over time. Ongoing assurance is essential for maintaining your security posture and satisfying assessors at any point, not just at launch.
Frequently asked questions
Can your AI solutions handle PROTECTED-level information?
Yes. We design architectures that meet ISM controls at PROTECTED. The system is built to support your IRAP assessment — we provide the documentation and control evidence your assessor needs.
Do you use offshore AI models or APIs?
No. All processing stays within Australian infrastructure. Where we use foundation models, they are deployed on Australian-hosted endpoints or run locally within your environment.
How do you prevent the AI from leaking sensitive data?
Through a combination of data minimisation (the AI only sees what it needs), output filtering (responses are checked before delivery), access controls (different users get different capabilities), and continuous monitoring (anomalous behaviour triggers alerts).
What happens if a vulnerability is found in the AI system?
Our deployments include automated patching for known vulnerabilities in ML frameworks and dependencies, plus a documented process for model-level issues (isolation, rollback, re-testing). AI security events feed into your standard incident response process.
Do you support ongoing compliance, or just initial deployment?
Both. Every deployment includes monitoring, logging, and drift detection. We also provide ongoing assurance engagements to support continuous compliance as standards evolve.
Have a question about securing your AI deployment?
We're happy to walk through how these controls apply to your specific environment. No pitch, no pressure.
Get in Touch →