Artificial Intelligence has shifted from proof-of-concept to production, and the conversation among CISOs is shifting with it: from "what is it?" to "how could we use it?" and now to "where else can we use it, and how do we do so safely?" As AI accelerates business transformation, it also introduces new vectors of vulnerability, many of them hard to detect, or even invisible, to legacy governance frameworks. What concerns us most is that these new risks are proving elusive not just technically but procedurally.
In that rush forward, the deeper issues of behavior, culture, and accountability often get sidelined. Events like RSA and Black Hat spotlight the urgency of AI risk, yet many vulnerabilities remain elusive, not because they are purely technical, but because traditional compliance and governance models weren't built for this pace, this complexity, or the emergent behavior of generative systems.
A Trustmarque survey found that while 93% of organizations use AI, only 7% have fully embedded governance, and merely 8% have integrated AI governance into the development lifecycle. (itpro.com)
Meanwhile, a Gartner study of 200 IT and Data leaders revealed just 12% of enterprises have a dedicated AI governance framework, while 55% haven’t implemented one yet. (atlan.com)
And Forrester forecasts a massive ramp-up in investment: spending on off-the-shelf AI governance software is expected to more than quadruple, reaching $15.8 billion by 2030 at a compound annual growth rate of roughly 30%. (forrester.com)
This “we’ll catch up later” mindset is a latent risk bomb inside many enterprises. If governance is left as an afterthought, speed becomes your weak point—not your advantage.
As one of the most consequential forces in modern enterprise, AI isn’t just another compliance topic. It’s a systemic shift. The stakes are high: reputational risk, legal exposure, IP leakage, operational disruption—and above all, a breakdown in human trust. So how do we govern AI wisely, before it governs us by default?
Culture is complex. But that doesn’t mean it’s intangible. Culture can be understood, mapped, measured, and shaped. And when it comes to AI, your culture might just be your most predictive indicator of risk. It’s not a crystal ball—but it's as close to left of boom as most leaders can hope to get.
Here are ten hard—but essential—questions every CISO should be asking right now:
Most AI risk emerges from shadow use. Do you have visibility into which teams are deploying AI tools? Are they consumer-grade platforms or internal models? What’s being inputted, where does it go, and how is it stored?
Right now, many teams are checking at a technical level—auditing tools in use, scanning inputs, and blocking sensitive content. That's a great start. But what most organizations are missing is the final signal: the human one. Are you actively involving your people in identifying what they’re using, why, and how they perceive AI usage norms? Because those perceptions shape behaviors—and behaviors shape risk. If you want a true picture of shadow AI, you need both system visibility and a people-driven lens. Ask the humans. Measure the norm. Then build the change from there.
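One concrete starting point for the system-visibility half is mining the egress telemetry you already have. The sketch below is a minimal Python illustration: it assumes a CSV export of proxy or CASB logs with department and destination_host columns, plus a hand-maintained watchlist of generative AI domains. Both the column names and the domain list are assumptions for the example, not a reference to any particular product.

```python
import csv
from collections import Counter

# Illustrative, hand-maintained watchlist of generative AI endpoints.
# In practice this would come from your own egress review, not this sketch.
GENAI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "claude.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
}

def shadow_ai_usage(proxy_log_csv: str) -> Counter:
    """Count requests to known generative AI services, grouped by department.

    Assumes a CSV export with 'department' and 'destination_host' columns;
    adapt the field names to whatever your proxy or CASB actually emits.
    """
    usage = Counter()
    with open(proxy_log_csv, newline="") as handle:
        for row in csv.DictReader(handle):
            host = row.get("destination_host", "").lower()
            if any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS):
                usage[row.get("department", "unknown")] += 1
    return usage

if __name__ == "__main__":
    for department, hits in shadow_ai_usage("proxy_export.csv").most_common():
        print(f"{department}: {hits} requests to known AI services")
```

A tally like this only gives you the system lens; pair it with the survey and conversation described above to see why each team is reaching for those tools.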
You may have written an AI Acceptable Use Policy. But does it reflect the lived experience of your employees? Is it accessible, understandable, and aligned with how your people actually work?
Right now, many companies rely on technical controls and surface-level compliance checkboxes. But history shows that these alone don’t change behavior—people often click “I agree” without reading a word. Studies have shown that users spend mere seconds reading policy documents before accepting them. In one study, 97% of users agreed to terms that included absurd clauses like giving away their firstborn child—simply because they didn’t read them. Policies like these rarely lead to informed decision-making.
If we want policies to work in this new AI-accelerated world, we need a fundamental rethink: policies should be written for real comprehension, embedded into everyday workflows, and measured by actual behavioral outcomes—not just attestation rates.
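As a hedged illustration of what "measured by behavioral outcomes" could look like, the sketch below compares attestation rates against short scenario-based comprehension checks per team. The field names, sample numbers, and the idea of a "comprehension gap" are assumptions for the example, not a prescribed standard.

```python
from dataclasses import dataclass

@dataclass
class TeamPolicySignal:
    team: str
    attested: int      # people who clicked "I agree"
    headcount: int
    quiz_pass: int     # people who passed a short scenario-based check

def comprehension_gap(signal: TeamPolicySignal) -> float:
    """Difference between attestation rate and demonstrated comprehension.

    A large positive gap suggests people are agreeing without understanding,
    which is exactly the failure mode the attestation rate alone hides.
    """
    attest_rate = signal.attested / signal.headcount
    pass_rate = signal.quiz_pass / signal.headcount
    return attest_rate - pass_rate

teams = [
    TeamPolicySignal("engineering", attested=48, headcount=50, quiz_pass=41),
    TeamPolicySignal("marketing", attested=30, headcount=31, quiz_pass=12),
]

for t in teams:
    print(f"{t.team}: comprehension gap {comprehension_gap(t):.0%}")
```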
AI risk reporting isn’t just about tooling—it’s cultural. If someone uploads a confidential document into ChatGPT, would they raise their hand or hide it? Governance requires psychological safety. This is so important, we made it a key dimension in both our Human Risk Baseline Assessment and our Cyber Culture Model—because enabling open reporting and trust is not just a soft skill; it’s a strategic advantage. It’s also a prime example of how culture factors can be made real, objective, and quantified—moving from theory to measurable reality.
We're running so hard towards human-in-the-loop, but we haven’t yet defined what the human’s job is in that loop. AI often creates business-critical outputs—but what constitutes an 'edit' versus an 'audit'? Are there guardrails in place that would catch when either the model or the human has veered off track? Would someone recognize the subtle signs of subterfuge? Are there processes to check for bias, error, or misalignment with regulatory frameworks? Is there clear ownership over decisions influenced—or even co-authored—by AI?
Risk doesn’t live in the algorithm—it lives in how humans use, trust, or circumvent it. The governance models we use must evolve beyond just technical oversight. They must account for the very real and very human ways people interact with, bend, or even break the intended purpose of AI systems. What happens when the human in the loop implicitly trusts a flawed output—or worse, doesn’t notice a subtle error or manipulation? Risk can emerge not from malevolence but from misplaced confidence, fatigue, or misunderstood instructions. That’s why governance has to be human-centered—because no matrix of checkboxes can anticipate the nuanced decisions, shortcuts, or rationalizations people make under pressure.
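One way to make the "edit versus audit" distinction concrete is to route AI-assisted output through an explicit review gate with named owners and written criteria. The Python sketch below is illustrative only: the risk criteria (regulated data, customer-facing use, a surfaced confidence score) and the thresholds are assumptions, chosen to show that ownership and escalation can be encoded rather than left implied.

```python
from dataclasses import dataclass
from enum import Enum

class ReviewLevel(Enum):
    EDIT = "light-touch edit by the requester"
    AUDIT = "formal audit by an accountable reviewer"
    BLOCK = "hold for risk/compliance sign-off"

@dataclass
class AIOutput:
    author: str                 # the human in the loop, the accountable owner
    customer_facing: bool
    contains_regulated_data: bool
    model_confidence: float     # assumed to be surfaced by the tooling

def review_gate(output: AIOutput) -> ReviewLevel:
    """Decide how much human scrutiny an AI-assisted artifact needs.

    The rules here are illustrative assumptions; the point is that the
    criteria, and who owns the decision, are written down and testable.
    """
    if output.contains_regulated_data:
        return ReviewLevel.BLOCK
    if output.customer_facing or output.model_confidence < 0.7:
        return ReviewLevel.AUDIT
    return ReviewLevel.EDIT

draft = AIOutput(author="j.doe", customer_facing=True,
                 contains_regulated_data=False, model_confidence=0.82)
print(review_gate(draft))   # ReviewLevel.AUDIT
```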
Do you have predefined steps for when AI tools malfunction, hallucinate, or get misused? And more importantly—what counts as a malfunction? What if the root cause is a human decision, a misconfiguration, or a subtle misuse rather than a system glitch? Have you mapped out the difference between model failure and user-driven risk? These blurred lines make remediation more complicated. Your plan needs to address not just technical errors but also behavioral triggers. It’s a critical gap—one we’ve seen before in phishing simulations where the focus is all stick and no learning. AI brings even higher stakes. Email is risky, yes—but it doesn’t write your strategy documents, process contracts, or draft client communications. So what’s the plan when things go off the rails? How will your organization learn, respond, and evolve?
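A response plan can only separate model failure from user-driven risk if the taxonomy exists before the incident does. As a minimal sketch, the snippet below defines an incident record that forces the triager to pick a root-cause category and to note whether a human decision was involved; the categories themselves are assumptions for illustration, not an established standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class RootCause(Enum):
    MODEL_FAILURE = "hallucination, drift, or other model-side error"
    MISCONFIGURATION = "tool set up or integrated incorrectly"
    USER_MISUSE = "policy-violating or careless human use"
    DATA_EXPOSURE = "sensitive data sent to an external model"

@dataclass
class AIIncident:
    summary: str
    root_cause: RootCause
    human_decision_involved: bool
    business_process: str          # e.g. "contract drafting"
    opened_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    def needs_behavioral_followup(self) -> bool:
        """User-driven incidents trigger learning, not just a technical fix."""
        return self.root_cause in (RootCause.USER_MISUSE, RootCause.DATA_EXPOSURE)

incident = AIIncident(
    summary="Confidential pricing sheet pasted into a public chatbot",
    root_cause=RootCause.DATA_EXPOSURE,
    human_decision_involved=True,
    business_process="sales proposals",
)
print(incident.needs_behavioral_followup())  # True
```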
Vendor AI usage is a blind spot in many risk registers. Are you asking the right questions about their LLMs, fine-tuning practices, data retention, and security hygiene? Start by mapping where and how vendors are embedding AI into their products and services—especially those touching your data or operations. Ask if they’ve assessed the security and ethical risks of their models, how often those assessments are updated, and what human oversight exists during fine-tuning. Have they put guardrails in place for hallucination or output manipulation? Are their employees allowed to use generative tools, and under what governance? The right questions aren’t just technical—they're operational, cultural, and contractual.
Annual awareness won’t cut it. Do you have ongoing, contextual, scenario-based education? Is AI literacy considered part of your upskilling strategy? Legacy cybersecurity awareness hasn’t evolved fast enough to meet the pace of workforce transformation or the AI-enabled future. To truly meet the moment, many organizations may need a complete overhaul—from how Human Risk Management teams are structured to the platforms used for delivering ongoing, personalized content. LMS platforms with a once-a-year phishing module aren’t enough. We need dynamic, role-relevant, just-in-time learning that scales with the complexity of today’s hybrid, AI-infused work environment.
You can’t manage what you don’t measure. Do you have KPIs for safe AI use, human-AI trust, or the prevalence of shadow AI? How often do you revisit these? There are many frameworks developing now around this topic, and our take is that they must include the human angle and culture. It’s not enough to check systems—measuring mindset, behavior, and trust patterns is just as critical. You need a balanced view of AI engagement and governance that blends technical audits with cultural diagnostics.
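To make that blend tangible, the sketch below shows how technical telemetry, survey data, and assessment results could roll up into a handful of AI governance KPIs. The metric names, equal weighting, and sample figures are illustrative assumptions, not a proposed standard.

```python
from dataclasses import dataclass

@dataclass
class AIGovernanceKPIs:
    sanctioned_tool_share: float   # telemetry: share of AI traffic to approved tools
    shadow_ai_reporters: float     # survey: share who say they would report misuse
    policy_comprehension: float    # assessment: share passing scenario checks

    def composite_score(self) -> float:
        """Blend system and human signals into one trend line.

        Equal weighting is an assumption for the example; most organizations
        will tune the weights to their own risk appetite.
        """
        return round(
            (self.sanctioned_tool_share
             + self.shadow_ai_reporters
             + self.policy_comprehension) / 3, 2)

q2 = AIGovernanceKPIs(sanctioned_tool_share=0.64,
                      shadow_ai_reporters=0.41,
                      policy_comprehension=0.55)
print(q2.composite_score())  # 0.53
```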
AI is a board-level risk. Are you equipping leadership with the language, data, and frameworks to grasp it fully—not just technically, but reputationally and ethically?
CISOs are being asked to play an outsized role in shaping the future of responsible AI. But this future won’t be built with tech alone. It requires cross-functional collaboration, cultural intelligence, and bold governance that bridges risk, behavior, and business strategy.
The AI risk landscape is still emerging—but your response must be mature, fast, and human-first.
At Cybermaniacs, we help leaders operationalize AI risk management through cultural diagnostics, behavior-aware governance, and continuous learning programs. Don’t just draft policies—build resilient systems of trust.
📩 Talk to our team about building your Human Centric AI risk roadmap. Follow us on LinkedIn to stay ahead of the curve.