
Artificial intelligence (AI) is being rapidly integrated into public sector operations. In 2024 alone, federal agencies reported more than 1,700 AI use cases, more than double the number from the prior year. With half of these concentrated in departments managing sensitive national missions such as healthcare, veteran services and homeland security, the need to secure AI systems in government is both urgent and complex. Success relies on an end-to-end approach to address risks, maintain compliance and build systems that are both explainable and resilient.
Prioritizing Trust and Accountability
One of the foundational challenges of securing AI in the public sector is navigating the evolving landscape of regulations and governance. Even in the absence of any sweeping new federal AI legislation, existing data protection laws and sector-specific rules already inform how AI must be governed. Agencies must ensure their AI systems align with standards for responsible and ethical use, including those related to privacy, transparency, bias and oversight.
Regulators do not distinguish between errors made by humans and errors made by algorithms; the impacts are judged the same, and the potential costs of noncompliance, especially at scale, can be significant. Against this backdrop, transparency and explainability are essential. In high-risk scenarios, an AI model’s behavior and recommendations can have life-or-death implications, so it is critical to understand exactly how and why such models make their decisions.
Responsible AI governance must therefore be rooted in a multidisciplinary framework that incorporates ethical standards, legal compliance, human oversight and sustainability across the AI lifecycle. Systems must be designed with tools and processes that enable developers, operators and oversight bodies to trace decisions and interpret model behavior. The logic behind automated outputs must be clear and reviewable; otherwise, public sector IT teams find it nearly impossible to audit AI-driven decisions, assess fairness or hold systems accountable when failures occur.
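As a simple illustration of what decision traceability can look like in practice, the Python sketch below wraps a predictive model so that every automated output is logged with its inputs and model version for later review. The model object, feature names and log destination are hypothetical placeholders, not a prescribed design.

```python
import json
import logging
from datetime import datetime, timezone

# Minimal sketch: record every automated decision with its inputs, output
# and model version so reviewers and auditors can trace it later.
logging.basicConfig(filename="ai_decision_audit.log", level=logging.INFO)

def audited_predict(model, features: dict, model_version: str):
    """Run a prediction and write an audit record for later review."""
    prediction = model.predict([list(features.values())])[0]
    audit_record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": features,          # hypothetical feature dictionary
        "output": str(prediction),
    }
    logging.info(json.dumps(audit_record))
    return prediction
```

In a real deployment the same record would typically carry feature attributions or other explanation artifacts alongside the raw inputs, so that the "why" of a decision is reviewable, not just the "what."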
Securing the Data Throughout the AI Lifecycle
Data is the foundation of all AI models, and securing it at every stage, at rest, in transit and in use, is essential. This is especially critical for public agencies, which often work with highly sensitive data ranging from citizen records to national intelligence. Protecting such data requires layered defenses that address both traditional cybersecurity threats and emerging risks unique to AI.
At the storage level, datasets must be protected against unauthorized access and tampering. When data is transferred, whether through terrestrial networks or satellite communications, it must be encrypted using modern and, where possible, quantum-resistant standards. While data is in use, secure computing environments can help protect against memory-level attacks, which bypass traditional security measures by operating entirely within the running memory of a process or system. All these layers of protection become even more essential in the age of AI, since AI systems tend to make more connections across an organization’s datasets than traditional software does.
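As a minimal illustration of encryption at rest, the sketch below uses the open-source cryptography package’s Fernet recipe on a hypothetical record. Fernet is a conventional symmetric scheme rather than a post-quantum one, and a real deployment would draw and rotate keys from an agency-managed key management service or HSM rather than generating them next to the data.

```python
from cryptography.fernet import Fernet

# Illustrative only: symmetric encryption of a sensitive record at rest.
# In practice the key would come from an agency-managed KMS or HSM and
# never be generated and stored alongside the data like this.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"citizen_id=12345;benefit_status=approved"   # hypothetical record
ciphertext = cipher.encrypt(record)                     # what gets stored

# Decryption happens only inside an authorized environment.
plaintext = cipher.decrypt(ciphertext)
assert plaintext == record
```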
While AI is driving more sophisticated threats, such as social engineering attacks that employ deepfakes and other synthetic content, the good news is that AI can also power more advanced protections like behavioral analytics and anomaly detection that play a role in combating such risks. At the same time, basic cyber hygiene practices like strong access controls, multi-factor authentication and regular audits also remain essential to public sector cybersecurity.
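To make the anomaly-detection idea concrete, the sketch below trains a simple isolation forest on illustrative session telemetry and flags an off-hours bulk transfer as suspicious. The feature columns, numbers and contamination rate are assumptions for demonstration, not a production design.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Sketch: flag unusual access patterns from simple session telemetry.
# Columns: login hour, bytes transferred, failed login attempts.
rng = np.random.default_rng(42)
normal_sessions = rng.normal(
    loc=[13, 5_000, 0.2], scale=[3, 1_500, 0.5], size=(500, 3)
)

detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(normal_sessions)

new_sessions = np.array([
    [14, 5_200, 0],      # typical mid-day session
    [3, 250_000, 6],     # off-hours bulk transfer with failed logins
])
print(detector.predict(new_sessions))  # -1 marks a suspected anomaly
```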
Securing Operational Efficiency Through Smart Planning
Beyond transparency and cyber protections, securing public sector AI also involves maintaining basic operational integrity and efficiency, and that means managing costs. The resources required to develop and run sophisticated AI models include energy-intensive computing, large datasets and specialized talent. These requirements can stretch tight government budgets, but smart planning can help keep costs in check.
For instance, interoperable platforms between agencies for fraud detection or other shared challenges can prevent duplication and foster more efficient use of resources for common problems that plague many areas of government. Another approach is to use retrieval-augmented generation (RAG), data compression algorithms and other advanced techniques to enable the use of smaller models while maintaining high accuracy of the AI platform. This can reduce dependency on large, resource-heavy systems and support more precise, mission-specific applications that align with budget and policy constraints.
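As a rough sketch of the retrieval step in such a pipeline, the example below uses simple TF-IDF similarity over a hypothetical policy knowledge base to select grounding passages before prompting a smaller model. The documents, query and downstream model call are assumptions; production RAG systems would typically use learned embeddings and a vector database instead.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical agency knowledge base used to ground a smaller model.
documents = [
    "Claims flagged for manual review must be resolved within 30 days.",
    "Benefit eligibility requires proof of residency and income verification.",
    "Fraud indicators include duplicate claims filed across multiple regions.",
]

vectorizer = TfidfVectorizer().fit(documents)
doc_vectors = vectorizer.transform(documents)

def retrieve(query: str, top_k: int = 1) -> list[str]:
    """Return the passages most relevant to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    return [documents[i] for i in scores.argsort()[::-1][:top_k]]

context = retrieve("What are common fraud indicators?")
prompt = f"Answer using only this context:\n{context}\n\nQuestion: ..."
# The prompt would then go to a compact, mission-specific language model.
```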

At the infrastructure level, agencies should consider cloud platforms that offer an alternative to expensive on-premises systems by providing scalable compute and storage, enhanced security features and streamlined management. Completing the picture for secure and cost-effective public sector AI is smart workforce planning. This includes automating repetitive tasks to free up employees for more strategic responsibilities, and building internal expertise through targeted training programs to reduce dependence on external consultants and better manage projects in-house.
As public sector AI continues to grow in both scale and impact, the choices made today will shape the safety, trust and effectiveness of these systems for years to come. Agencies must understand how AI intersects with their unique business contexts and risks, and they must actively coordinate across teams to ensure alignment with both strategic goals and security requirements. Ultimately, securing AI in these environments requires a proactive, end-to-end approach that embeds security, privacy, fairness and efficiency throughout the AI lifecycle.
About the Author
Burnie Legette has been in the high-tech industry for more than 20 years, spending 15 of those years in the communications and networking sectors. Burnie officially joined Intel in November 2016 and now serves as the Director of IOT Sales, Artificial Intelligence at Intel. Burnie has played a key role in Intel’s AI initiative, serving as an AI “Core Team” member who has helped define Intel’s Edge AI sales strategy, AI organizational support structure and internal processes, including input on product development. Burnie holds a BS in Electrical Engineering from the University of Michigan, an MBA from Babson College (Management/Finance), a certificate in Behavioral Economics from the Harvard Business School, and a Master of Divinity (MDiv) from Gordon Conwell Theological Seminary. In addition, Burnie is an active contributor to Intel’s commitment to diversity and inclusion, assisting in minority college recruiting and minority retention initiatives. Burnie is married to his beautiful wife Michelle, and they have three children. In his free time, Burnie enjoys reading, working out, entrepreneurship and community service.