The Business Fame Magazine is proud to feature Lisa Ventura MBE FCIIS, Chief Executive and Founder of the AI and Cyber Security Association, the world’s first global trade association dedicated to the convergence of artificial intelligence and cyber security. An accomplished author, writer, and keynote speaker, Lisa is set to publish her forthcoming book, Artificial Intelligence in Cybersecurity, with Kogan Page in Spring 2026. With a distinctive career journey that began in journalism and broadcasting—working alongside renowned media figures such as Chris Tarrant—Lisa brings the power of storytelling into the cyber security domain.
Here is the conversation:
1. For our readers, could you briefly introduce yourself and share what you do today?
I’m Lisa Ventura MBE FCIIS, Chief Executive and Founder of the AI and Cyber Security Association, the world’s first global trade association focused on the convergence of AI and cyber security, and author of Artificial Intelligence in Cybersecurity, which will be published by Kogan Page in Spring 2026. I’m also a writer and keynote speaker, and I work with various organisations on their security awareness and culture change programmes.
My professional journey is rather unusual. I started in journalism and broadcasting, working alongside people like Chris Tarrant, the first host of “Who Wants to Be a Millionaire?”, before transitioning into cyber security. This background in storytelling fundamentally shapes how I approach cyber security awareness training and communications today. I work to humanise what can often feel like an intimidating technical field, making it accessible and inclusive for everyone.
I also run monthly #InfosecLunchHour networking events that prioritise neuroinclusive, psychologically safe spaces for cyber security professionals to connect under Chatham House Rules.
2. What inspired you to bring together cyber security awareness, AI, neurodiversity, and mental health in your work?
The traditional approach to cyber security awareness, the ‘human firewall’ narrative, was fundamentally failing people. We were treating humans as weaknesses to be fixed rather than as individuals with diverse needs, cognitive styles, and psychological realities. I kept seeing security training that inadvertently traumatised people, created anxiety, or simply didn’t work for neurodivergent minds.
When I started working on a cyber awareness training programme with an organisation spanning eight European countries with different languages, cultures, and accessibility requirements, it became crystal clear that one-size-fits-all security training was not just ineffective; it was actively harmful. I began developing trauma-informed, accessibility-focused approaches centred on psychological safety.
The results spoke for themselves. By incorporating neurodivergent-friendly design principles, micro-learning, and trauma-informed methodologies, we achieved a 90% reduction in phish-prone behaviours. This wasn’t just about better metrics; it was about respecting people’s dignity while protecting organisations. As AI increasingly mediates our security interactions, ensuring these systems are inclusive and ethical from the ground up isn’t optional; it’s essential.
3. Why do you believe human behaviour remains the most important factor in cyber security awareness?
Because technology doesn’t make decisions, people do. Every security breach, every data leak, every successful phishing attack ultimately involves a human decision point. But here’s the critical shift in thinking: instead of blaming individuals for these decisions, we need to understand the psychological, cognitive, and environmental factors that influence them.
Security awareness isn’t about making people feel guilty or afraid. It’s about equipping them with knowledge and tools that work with their natural cognitive patterns, not against them. Someone who’s neurodivergent might process information differently. Someone under stress might miss warning signs they’d normally catch. Someone working across time zones might be more vulnerable when fatigued.
When we move away from the shame-based ‘human firewall’ model and towards psychological safety and inclusive design, we create environments where people feel empowered to report mistakes, ask questions, and make security part of their natural workflow. That’s when human behaviour becomes our greatest strength, not our supposed weakness.
4. As AI adoption accelerates, what ethical or inclusivity challenges should leaders be most aware of?
The most pressing challenge is that AI systems inherit and often amplify existing biases and exclusions. If your security training data primarily reflects neurotypical, able-bodied, English-speaking users, your AI-powered security tools will fail entire populations.
Leaders need to ask uncomfortable questions: Who is designing these AI systems? Whose experiences are centred in their development? What happens to users who don’t fit the assumed ‘normal’ profile? AI-driven security awareness platforms that rely on rapid-fire decision-making might be completely inaccessible to users with processing differences or anxiety disorders.
There’s also the critical issue of transparency and consent. Users need to understand when they’re interacting with AI, what data is being collected about their behaviour, and how those insights are used. The surveillance capabilities of AI security tools can create hostile work environments if not implemented with genuine care for employee wellbeing.
Finally, leaders must recognise that AI is not neutral. Every algorithmic decision reflects choices made by its creators. Building ethical, inclusive AI-security convergence requires diverse teams, ongoing accessibility audits, and a genuine commitment to human dignity over efficiency metrics.
5. What is one practical action organisations can take to create genuinely neuroinclusive workplaces?
Stop assuming everyone thinks, communicates, and processes information the same way, and actively design for diversity from the start, not as an afterthought.
Practically, this means offering multiple formats for all critical communications and training. Don’t just deliver a 60-minute video training session and assume everyone absorbed it equally. Provide written transcripts, visual aids, hands-on simulations, and micro-learning modules. Let people engage with content at their own pace, revisit materials, and choose the format that works for their cognitive style.
In security specifically, this might look like offering phishing simulations with clear explanations rather than shame-based ‘gotcha’ moments, providing quiet spaces for security briefings rather than only high-stimulus environments, and allowing asynchronous participation in security discussions for those who need processing time.
But perhaps most importantly: listen to neurodivergent employees. They’re the experts on their own experiences. Create feedback mechanisms, involve them in policy development, and actually implement their suggestions. Genuine neuroinclusion isn’t about perfect policies, it’s about continuous learning, adaptation, and centring the voices of those most affected.
6. Looking ahead, what change do you most hope to see in the global cyber and neurodiversity landscape?
I hope to see us collectively abandon the ‘human firewall’ mindset and embrace a future where psychological safety is recognised as a cornerstone of effective cyber security. This means moving beyond viewing neurodivergent individuals as accommodation cases and recognising that designing for cognitive diversity from the outset creates better security for everyone.
I’d love to see AI-security convergence developed with ethics and inclusion at its core, and not bolted on later. This requires diverse teams building these systems, including neurodivergent professionals whose insights are invaluable for creating truly accessible security technology.
On a broader cultural level, I hope we can create a cyber security profession that welcomes and celebrates different ways of thinking rather than demanding conformity to neurotypical norms. The pattern recognition abilities often associated with autism, the hyperfocus capabilities of ADHD, the creative problem-solving of dyslexic thinkers: these aren’t deficits. They’re precisely the diverse cognitive strengths our field desperately needs.
Through the AI and Cyber Security Association and my work developing neuroinclusive cyber security communities, I’m working toward this future every day. It’s not just about making our field more ethical, though that’s crucial of course. It’s about unlocking the full potential of all the brilliant minds we’ve been systematically excluding.