Microsoft Research

AI Testing and Evaluation: Learnings from cybersecurity

By Advanced AI Editor | July 14, 2025 | 27 Mins Read


CIARAN MARTIN: Well, thanks so much for inviting me. It’s great to be here.

SULLIVAN: Ciaran, before we get into some regulatory specifics, it’d be great to hear a little bit more about your origin story, and just take us to that day—who tapped you on the shoulder and said, “Ciaran, we need you to run a national cyber center! Do you fancy building one?”

MARTIN: You could argue that I owe my job to Edward Snowden. Not an obvious thing to say. So the National Cyber Security Centre, which didn’t exist at the time—I was invited to join the British government’s cybersecurity effort in a leadership role—is now a subset of GCHQ. That’s the digital intelligence agency. The equivalent in the US obviously is the NSA [National Security Agency]. It had been convulsed by the Snowden disclosures. It was an unprecedented challenge.

I was a 17-year career government fixer with some national security experience. So I was asked to go out and help with the policy response, the media response, the legal response. But I said, look, any crisis, even one as big as this, is over one way or the other in six months. What should I do long term? And they said, well, we were thinking of asking you to try to help transform our cybersecurity mission. So the National Cyber Security Centre was born, and I was very proud to lead it, and all in all, I did it for seven years from startup to handing it on to somebody else.

SULLIVAN: I mean, it’s incredible. And just building on that, people spend a significant portion of their lives online now with a variety of devices, and maybe for listeners who are newer to cybersecurity, could you give us the 90-second lightning talk? Kind of, what does risk assessment and testing look like in this space?

MARTIN: Well, risk assessment and testing, I think, are two different things. You can’t defend everything. If you defend everything, you’re defending nothing. So broadly speaking, organizations face three threats. One is complete disruption of their systems. So just imagine not being able to access your system. The second is data protection, and that could be sensitive customer information. It could be intellectual property. And the third is, of course, you could be at risk of just straightforward being stolen from. I mean, you don’t want any of them to happen, but you have to have a hierarchy of harm.

SULLIVAN: Yes.

MARTIN: So that’s your risk assessment.

The testing side, I think, is slightly different. One of the paradoxes, I think, of cybersecurity is for such a scientific, data-rich subject, the sort of metrics about what works are very, very hard to come by. So you’ve got boards and corporate leadership and senior governmental structures, and they say, “Look, how do I run this organization safely and securely?” And a cybersecurity chief within the organization will say, “Well, we could get this capability in.” Well, the classic question for a leadership team to ask is, well, what risk and harm will this reduce, by how much, and what’s the cost-benefit analysis? And we find that really hard.

So that’s really where testing and assurance comes in. And also as technology changes so fast, we have to figure out, well, if we’re worried about post-quantum cryptography, for example, what standards does it have to meet? How do you assess whether it’s meeting those standards? So it’s a huge issue in cybersecurity and one that we’re always very conscious of. It’s really hard.

SULLIVAN: Given the scope of cybersecurity, are there any differences in testing, let’s say, for maybe a small business versus a critical infrastructure operator? Are there any, sort of, metrics we can look at in terms of distinguishing risk or assessment?

MARTIN: There have to be. One of the reasons I think why we have to be is that no small business can be expected to take on a hostile nation-state that’s well equipped. You have to be realistic.

If you look at government guidance, certainly in the UK 15 years ago on cybersecurity, you were telling small businesses that are living hand to mouth, week by week, trying to make payments at the end of each month, we were telling them they needed sort of nation-state-level cyber defenses. That was never going to happen, even if they could afford it, which they couldn’t. So you have to have some differentiation. So again, you’ve got assessment frameworks and so forth where you have to meet higher standards. So there absolutely has to be that distinction. Otherwise, you end up in a crazy world of crippling small businesses with just unmanageable requirements which they’re never going to meet.

SULLIVAN: It’s such a great point. You touched on this a little bit earlier, as well, but just cybersecurity governance operates in a fast-moving technology and threat environment. How have testing standards evolved, and where do new technical standards usually originate?

MARTIN: I keep saying this is very difficult, and it is. [LAUGHTER] So I think there are two challenges. One is actually about the balance, and this applies to the technology of today as well as the technology of tomorrow. This is about, how do you make sure things are good enough without crowding out new entrants? You want people to be innovative and dynamic. You want disruptors in this business.

But if you say to them, “Look, well, you have to meet these 14 impossibly high technical standards before you can even sell to anybody or sell to the government,” whatever, then you’ve got a problem. And I think we’ve wrestled with that, and there’s no perfect answer. You just have to try and go to … find the sweet spot between two ends of a spectrum. And that’s going to evolve.

The second point, which in some respects if you’ve got the right capabilities is slightly easier but still a big call, is around, you know, those newer and evolving technologies. And here, having, you know, been a bit sort of gloomy and pessimistic, here I think is actually an opportunity. So one of the things we always say in cybersecurity is that the internet was built and developed without security in mind. And that was kind of true in the ’90s and the noughties, as we call them over here.

But I think as you move into things like post-quantum computing, applied use of AI, and so on, you can actually set the standards at the beginning. And that’s really good because it’s saying to people that these are the things that are going to matter in the post-quantum age. Here’s the outline of the standards you’re going to have to meet; start looking at them. So there’s an opportunity actually to make technology safer by design, by getting ahead of it. And I think that’s the era we’re in now.

SULLIVAN: That makes a lot of sense. Just building on that, do businesses and the public trust these standards? And I guess, which standard do you wish the world would just adopt already, and what’s the real reason they haven’t?

MARTIN: Well, again, where do you start? I mean, most members of the public quite rightly haven’t heard of any of these standards. I think public trust and public capital in any society matters. But I think it is important that these things are credible.

And there’s quite a lot of convergence between, you know, the top-level frameworks. And obviously in the US, you know, the NIST [National Institute of Standards and Technology] framework is the one that’s most popular for cybersecurity, but it bears quite a strong resemblance to the international one, ISO[/IEC] 27001, and there are others, as well. But fundamentally, they boil down to kind of five things. Do a risk assessment; work out what your crown jewels are. Protect your perimeter as best you can. Those are the first two.

The third one then is when your perimeter’s breached, be able to detect it more times than not. And when you can’t do that, you go to the fourth one, which is, can you mitigate it? And when all else fails, how quickly can you recover and manage it? I mean, all the standards are expressed in way more technical language than that, but fundamentally, if everybody adopted those five things and operated them in a simple way, you wouldn’t eliminate the harm, but you would reduce it quite substantially.

SULLIVAN: Which policy initiatives are most promising for incentivizing companies to undertake, you know, these cybersecurity testing parameters that you’ve just outlined? Governments, including the UK, have used carrots and sticks, but what do you think will actually move the needle?

MARTIN: I think there are two answers to that, and it comes back to your split between smaller businesses and critically important businesses. In the critically important services, I think it’s easier because most industries are looking for a level playing field. In other words, they realize there have to be rules and they want to apply them to everyone.

We had a fascinating experience when I was in government back in around 2018 where the telecom sector, they came to us and they said, we’ve got a very good cooperative relationship with the British government, but it needs to be put on a proper legal footing because you’re just asking us nicely to do expensive things. And in a regulated sector, if you actually put in some rules—and please develop them jointly with us; that’s the crucial part—then that will help because it means that we’re not going to our boards and saying, or our shareholders, and saying that we should do this, and they’re saying, “Well, do you have to do it? Are our competitors doing it?” And if the answer to that is, yes, we have to, and, yes, our competitors are doing it, then it tends to be OK.

The harder nut to crack is the smaller business. And I think there’s a real mystery here: why has nobody cracked a really good and easy solution for small business? We need to be careful about this because, you know, you can’t throttle small businesses with onerous regulation. At the same time, we’re not brilliant, I think, in any part of the world at using the normal corporate governance rules to try and get people to figure out how to do cybersecurity.

There are initiatives there that are not the sort of pretty heavy stick that you might have to take to a critical function, but they could help. But that is a hard nut to crack. And I look around the world, and, you know, I think if this was easy, somebody would have figured it out by now. I think most of the developed economies around the world really struggle with cybersecurity for smaller businesses.

SULLIVAN: Yeah, it’s a great point. Actually building on one of the comments you made on the role of, kind of, government, how do you see the role of private-public partnerships scaling and strengthening, you know, robust cybersecurity testing?

MARTIN: I think they’re crucial, but they have to be practical. I’ve got a slight, sort of, high horse on this, if you don’t mind, Kathleen. It’s sort of … [LAUGHS]

SULLIVAN: Of course.

MARTIN: I think that there are two types of public-private partnership. One involves committees saying that we should strengthen partnerships and we should all work together and collaborate and share stuff. And we tried that for a very long time, and it didn’t get us very far. There are other types.

We had some at the National Cyber Security Centre where we paid companies to do spectacularly good technical work that the market wouldn’t provide. So I think it’s sort of partnership with a purpose. I think sometimes, and I understand the human instinct to do this, particularly in governments and big business, they think you need to get around a table and work out some grand strategy to fix everything, and the scale of the … not just the problem but the scale of the whole technology is just too big to do that.

So pick a bit of the problem. Find some ways of doing it. Don’t over-lawyer it. [LAUGHTER] I think sometimes people get very nervous. Oh, well, is this our role? You know, should we be doing this, that, and the other? Well, you know, sometimes certainly in this country, you think, well, who’s actually going to sue you over this, you know? So I wouldn’t over-programmatize it. Just get stuck practically into solving some problems.

SULLIVAN: I love that. Actually, [it] made me think, are there any surprising allies that you’ve gained—you know, maybe someone who you never expected to be a cybersecurity champion—through your work?

MARTIN: Ooh! That’s a … that’s a… what a question! To give you a slightly disappointing answer, but it relates to your previous question. In the early part of my career, I was working in institutions like the UK Treasury long before I was in cybersecurity, and the treasury and the British civil service in general, but the treasury in particular sort of trained you to believe that the private sector was amoral, not immoral, amoral. It just didn’t have values. It just had bottom line, and, you know, its job essentially was to provide employment and revenue then for the government to spend on good things that people cared about. And when I got into cybersecurity and people said, look, you need to develop relations with this cybersecurity company, often in the US, actually. I thought, well, what’s in it for them?

And, sure, sometimes you were paying them for specific services, but other times, there was a real public spiritedness about this. There was a realization that if you tried to delineate public-private boundaries, that it wouldn’t really work. It was a shared risk. And you could analyze where the boundaries fell or you could actually go on and do something about it together. So I was genuinely surprised at the allyship from the cybersecurity sector. Absolutely, I really, really was. And I think it’s a really positive part of certainly the UK cybersecurity ecosystem.

SULLIVAN: Wonderful. Well, we’re coming to the end of our time here, but is there any maybe last thoughts or perhaps requests you have for our listeners today?

MARTIN: I think that standards, assurance, and testing really matter, but it’s a bit like the discussion we’re having over AI. Get all these things to take you 80, 90% of the way and then really apply your judgment. There’s been some bad regulation under the auspices of standards and assurance. First of all, it’s, have you done this assessment? Have you done that? Have you looked at this? Well, fine. And you can tick that box, but what does it actually mean when you do it? What bits that you know in your heart of hearts are really important to the defense of your organization that may not be covered by this and just go and do those anyway. Because sure it helps, but it’s not everything.

SULLIVAN: No. Great, great closing sentiment. Well, Ciaran, thank you for joining us today. This has been just a super fun conversation and really insightful. Just really enjoyed the conversation. Thank you.

MARTIN: My pleasure, Kathleen, thank you.

[TRANSITION MUSIC]

SULLIVAN: Now, I’m happy to introduce Tori Westerhoff. As a principal director on the Microsoft AI Red Team, Tori leads all AI security and safety red team operations, as well as dangerous capability testing, to directly inform C-suite decision-makers.

So, Tori, welcome!

TORI WESTERHOFF: Thanks. I am so excited to be here.

SULLIVAN: I’d love to just start a little bit more learning about your background. You’ve worn some very intriguing hats. I mean, cognitive neuroscience grad from Yale, national security consultant, strategist in augmented and virtual reality … how do those experiences help shape the way you lead the Microsoft AI Red Team?

WESTERHOFF: I always joke this is the only role I think will always combine the entire patchwork LinkedIn résumé. [LAUGHS]

I think I use those experiences to help me understand the really broad approach that AI Red Team—artist also known as AIRT; I’m sure I’ll slip into our acronym—how we frame up the broad security implications of AI. So I think the cognitive neuroscience element really helped me initially approach AI hacking, right. There’s a lot of social engineering and manipulation within chat interfaces that are enabled by AI. And also, kind of, this, like, metaphor for understanding how to find soft spots in the way that you see human heuristics show up, too. And so I think that was actually my personal “in” to getting hooked into AI red teaming generally.

But my experience in national security and I’d also say working through the AR/VR/metaverse space at the time where I was in it helped me balance both how our impact is framed, how we’re thinking about critical industries, how we’re really trying to push our understanding of where security of AI can help people the most. And also do it in a really breakneck speed in an industry that’s evolving all of the time, that’s really pushing you to always be at the bleeding edge of your understanding. So I draw a lot of the energy and the mission criticality and the speed from those experiences as we’re shaping up how we approach it.

SULLIVAN: Can you just give us a quick rundown? What does the Red Team do? What actually, kind of, is involved on a day-to-day basis? And then as we think about, you know, our engagements with large enterprises and companies, how do we work alongside some of those companies in terms of testing?

WESTERHOFF: The way I see our team is almost like an indicator light that works really part and parcel with product development. So the way we’ve organized our expert red teaming efforts is that we work with product development before anything ships out to anyone who can use it. And our job is to act as expert AI manipulators, AI hackers. And we are supposed to take the theories and methods and new research and harness it to find examples of vulnerabilities or soft spots in products to enable product teams to harden those soft spots before anything actually reaches someone who wants to use it.

So if we’re the indicator light, we are also not the full workup, right. I see that as measurement and evals. And we also are not the mechanic, which is that product development team that’s creating mitigations. It’s platform-security folks who are creating mitigations at scale. And there’s a really great throughput of insights from those groups back into our area where we love to inform about them, but we also love to add on to, how do we break the next thing, right? So it’s a continuous cycle.

And part of that is just being really creative and thinking outside of a traditional cybersecurity box. And part of that is also really thinking about how we pull in research—we have a research function within our AI Red Team—and how we automate and scale. This year, we’ve pulled a lot of those assets and insights into the Azure [AI] Foundry AI Red Teaming Agent. And so folks can now access a lot of our mechanisms through that. So you can get a little taste of what we do day to day in the AI Red Teaming Agent.

SULLIVAN: You recently—actually, with your team—published a report that outlined lessons from testing over a hundred generative AI products. But could you share a bit about what you learned? What were some of the important lessons? Where do you see opportunities to improve the state of red teaming as a method for probing AI safety?

WESTERHOFF: I think the most important takeaway from those lessons is that AI security is truly a team sport. You’ll hear cybersecurity folks say that a lot. And part of the rationale there is that the defense in depth and integrating and a view towards AI security through the entire development of AI systems is really the way that we’re going to approach this with intentionality and responsibility.

So in our space, we really focus on novel harm categories. We are pushing bleeding edge, and we also are pushing iterative and, like, contextually based red teaming in product dev. So outside of those hundred that we’ve done, there’s a community [LAUGHS] through the entire, again, multistage life cycle of a product that is really trying to push the cost of attacking those AI systems higher and higher with all of the expertise they bring. So we may be, like, the experts in AI hacking in that line, but there are also so many partners in the Microsoft ecosystem who are thinking about their market context or they really, really know the people who love their products. How are they using it?

And then when you bubble out, you also have industry and government who are working together to push towards the most secure AI implementation for people, right? And I think our team in particular, we feel really grateful to be part of the big AI safety and security ecosystem at Microsoft and also to be able to contribute to the industry writ large.

SULLIVAN: As you know, we had a chance to speak with Professor Ciaran Martin from the University of Oxford about the cybersecurity industry and governance there. What are some of the ideas and tools from that space that are surfacing in how we think about approaching red teaming and AI governance broadly?

WESTERHOFF: Yeah, I think it’s such a broad set of perspectives to bring in, in the AI instance. Something that I’ve noticed interjecting into security at the AI junction, right, is that cybersecurity has so many decades of experience of working through how to build trustworthy computing, for example, or bring an entire industry to bear in that way. And I think that AI security and safety can learn a lot of lessons of how to bring clarity and transparency across the industry to push universal understanding of where the threats really are.

So frameworks coming out of NIST, coming out of MITRE that help us have a universal language that inform governance, I think, are really important because it brings clarity irrespective of where you are looking into AI security, irrespective of your company size, what you’re working on. It means you all understand, “Hey, we are really worried about this fundamental impact.” And I think cybersecurity has done a really good job of driving towards impact as their organizational vector. And I am starting to see that in the AI space, too, where we’re trying to really clarify terms and threats. And you see it in updates of those frameworks, as well, that I really love.

So I think that the innovation is in transparency to folks who are really innovating and doing the work so we all have a shared language, and from that, it really creates communal goals across security instead of a lot of people being worried about the same thing and talking about it in a different way.

SULLIVAN: Mm-hmm. In the cybersecurity context, Ciaran really stressed matching risk frameworks to an organization’s role and scale. Microsoft plays many roles, including building models and shipping applications. How does your red teaming approach shift across those layers? 

WESTERHOFF: I love this question also because I love it as part of our work. So one of the most fascinating things about working on this team has been the diversity of the technology that we end up red teaming and testing. And it feels like we’re in the crucible in that way. Because we see AI applied to so many different architectures, tech stacks, individual features, models, you name it.

Part of my answer is that we still care about the highest-impact things. And so irrespective of the iteration, which is really fascinating and I love, I still think that our team drives to say, “OK, what is that critical vulnerability that is going to affect people in the largest ways, and can we battle test to see if that can occur?”

So in some ways, the task is always the same. I think in the ways that we change our testing, we customize a lot to the access to systems and data and also people’s trust almost as different variables that could affect the impact, right.

So a good example is if we’re thinking through agentic frameworks that have access to functions and tools and preferential ability to act on data, it’s really different to spaces where that action may not be feasible, right. And so I think the tailoring of the way to get to that impact is hyper-custom every time we start an engagement. And part of it is very thesis driven and almost mechanizing empathy.

You almost need to really focus on how people could use, or misuse, in such a way that you can emulate it before to a really great signal to product development, to say this is truly what people could do and we want to deliver the highest-impact scenarios so you can solve for those and also solve the underlying patterns, actually, that could contribute to maybe that one piece of evidence but also all the related pieces of evidence. So singular drive but like hyper-, hyper-customization to what that piece of tech could do and has access to.

SULLIVAN: What are some of the unexplored testing approaches or considerations from cybersecurity that you think we should encourage AI technologists, policymakers, and other stakeholders to focus on?

WESTERHOFF: I do love that AI humbles us each and every day with new capabilities and the potential for new capabilities. It’s not just saying, “Hey, there’s one test that we want to try,” but more, “Hey, can we create a methodology that we feel really, really solid about so that when we are asked a question we haven’t even thought of, we feel confident that we have the resources and the system?”

So part of me is really intrigued by the process that we’re asked to make without knowing what those capabilities are really going to bring. And then I think tactically, AIRT is really pushing on how we create new research methodologies. How are we investing in, kind of, these longer-term iterations of red teaming? So we’re really excited about pushing out those insights in an experimental and longer-term way.

I think another element is a little bit of that evolution of how industry standards and frameworks are updating to the AI moment and really articulating where AI is either furthering adversarial ability to create those harms or threats or identifying where AI has a net new harm. And I think that demystifies a little bit about what we talked about in terms of the lessons learned, that fundamentally, a lot of the things that we talk about are traditional security vulnerabilities, and we are standing on kind of that cybersecurity shoulder. And I’m starting to see those updates translate in spaces that are already considered trustworthy and kind of the basis on which not only cybersecurity folks build their work but also business decision-makers make decisions on those frameworks.

So to me, integration of AI into those frameworks by those same standards means that we’re evolving security to include AI. We aren’t creating an entirely new industry of AI security and that, I think, really helps anchor people in the really solid foundation that we have in cybersecurity anyways.

I think there’s also some work around how the cyber, like, defenses will actually benefit from AI. So we think a lot about threats because that’s our job. But the other side of cybersecurity is offense. And I’m seeing a ton of people come out with frameworks and methodologies, especially in the research space, on how defensive networks are going to be benefited from things like agentic systems.

Generally speaking, I think the best practice is to realize that we’re fundamentally still talking about the same impacts, and we can use the same avenues, conversations, and frameworks. We just really want them to be crisply updated with that understanding of AI applications.

SULLIVAN: How do you think about bringing others into the fold there? I think those standards and frameworks are often informed by technologists. But I’d love for you to expand [that to] policymakers or other kind of stakeholders in our ecosystem, even, you know, end consumers of these products. Like, how do we communicate some of this to them in a way that resonates and it has an impactful meaning?

WESTERHOFF: I’ve found the AI security-safety space to be one of the more collaborative. I actually think the fact that I’m talking to you today is probably evidence that a ton of people are bringing in perspectives that don’t only come from a long-term cybersecurity view. And I see that as a trend in how AI is being approached, as opposed to how those areas were moving earlier. So I think that speed and the idea of conversations and not always having the perfect answer but really trying to be transparent with what everyone does know is kind of a communal energy in the communities, at least, where we’re playing. [LAUGHS] So I am pretty biased, but at least in the spaces where we are.

SULLIVAN: No, I think we’re seeing that across the board. I mean, I’d echo [that] sitting in research, as well, like, that ability to have impact now and at speed to getting the amazing technology and models that we’re creating into the hands of our customers and partners and ecosystem is just underscored.

So on the note of speed, let’s shift gears a little bit to just a quick lightning round. I’d love to get maybe some quick thoughts from you, just 30-second answers here. I’ll start with one.

Which headline-grabbing AI threat do you think is mostly hot air?

WESTERHOFF: I think we should pay attention to it all. I’m a red team lead. I love a good question to see if we can find an answer in real life. So no hot air, just questions.

SULLIVAN: Is there some sort of maybe new tool that you can’t wait to sneak into the red team arsenal?

WESTERHOFF: I think there are really interesting methodologies that break our understanding of cybersecurity by looking at the intersection between different layers of AI and how you can manipulate AI-to-AI interaction, especially now when we’re looking at agentic systems. So I would say a method, not a tool.

SULLIVAN: So maybe ending on a little bit of a lighter note, do you have a go-to snack during an all-night red teaming session?

WESTERHOFF: Always coffee. I would love it to be a protein smoothie, but honestly, it is probably Trader Joe’s elote chips. Like the whole bag. [LAUGHTER] It’s going to get me through. I’m going to not love that I did it.

[MUSIC]

SULLIVAN: Amazing. Well, Tori, thanks so much for joining us today, and just a huge thanks also to Ciaran for his insights, as well.

WESTERHOFF: Thank you so much for having me. This was a joy.

SULLIVAN: And to our listeners, thanks for tuning in. You can find resources related to this podcast in the show notes. And if you want to learn more about how Microsoft approaches AI governance, you can visit microsoft.com/RAI.

See you next time! 

[MUSIC FADES]

